
Search Results

Found 184,674 documents matching the query
Diyanatul Husna
"ABSTRAK
Apache Hadoop merupakan framework open source yang mengimplementasikan MapReduce yang memiliki sifat scalable, reliable, dan fault tolerant. Scheduling merupakan proses penting dalam Hadoop MapReduce. Hal ini dikarenakan scheduler bertanggung jawab untuk mengalokasikan sumber daya untuk berbagai aplikasi yang berjalan berdasarkan kapasitas sumber daya, antrian, pekerjaan yang dijalankan, dan banyaknya pengguna. Pada penelitian ini dilakukan analisis terhadapap Capacity Scheduler dan Fair Scheduler. Pada saat Hadoop framework diberikan 1 pekerjaan dengan ukuran data set 1,03 GB dalam satu waktu. Waiting time yang dibutuhkan Capacity Scheduler dan Fair Scheduler adalah sama. Run time yang dibutuhkan Capacity Scheduler lebih cepat 6% dibandingkan Fair Scheduler pada single node. Sedangkan pada multi node Fair Scheduler lebih cepat 11% dibandingkan Capacity Scheduler. Pada saat Hadoop framework diberikan 3 pekerjaan secara bersamaan dengan ukuran data set (1,03 GB ) yang sama dalam satu waktu. Waiting time yang dibutuhkan Fair Scheduler lebih cepat dibandingkan Capacity Scheduler yaitu 87% lebih cepat pada single node dan 177% lebih cepat pada multi node. Run time yang dibutuhkan Capacity Scheduler lebih cepat dibandingkan Fair Scheduler yaitu 55% lebih cepat pada single node dan 212% lebih cepat pada multi node. Turnaround time yang dibutuhkan Fair Scheduler lebih cepat dibandingkan Capacity Scheduler yaitu 4% lebih cepat pada single node, sedangkan pada multi node Capacity Scheduler lebih cepat 58% dibandingkan Fair Scheduler. Pada saat Hadoop framework diberikan 3 pekerjaan secara bersamaan dengan ukuran data set yang berbeda dalam satu waktu yaitu data set 1 (456 MB), data set 2 (726 MB), dan data set 3 (1,03 GB) dijalankan secara bersamaan. Pada data set 3 (1,03 GB), waiting time yang dibutuhkan Fair Scheduler lebih cepat dibandingkan Capacity Scheduler yaitu 44% lebih cepat pada single node dan 1150% lebih cepat pada multi node. Run time yang dibutuhkan Capacity Scheduler lebih cepat dibandingkan Fair Scheduler yaitu 56% lebih cepat pada single node dan 38% lebih cepat pada multi node. Turnaround time yang dibutuhkan Capacity Scheduler lebih cepat dibandingkan Fair Scheduler yaitu 12% lebih cepat pada single node, sedangkan pada multi node Fair Scheduler lebih cepat 25,5% dibandingkan Capacity Scheduler

ABSTRACT
Apache Hadoop is an open source framework that implements MapReduce. It is scalable, reliable, and fault tolerant. Scheduling is an essential process in Hadoop MapReduce. It is because scheduling has responsibility to allocate resources for running applications based on resource capacity, queue, running tasks, and the number of user. This research will focus on analyzing Capacity Scheduler and Fair Scheduler. When hadoop framework is running single task. Capacity Scheduler and Fair Scheduler have the same waiting time. In data set 3 (1,03 GB), Capacity Scheduler needs faster run time than Fair Scheduler which is 6% faster in single node. While in multi node, Fair Scheduler is 11% faster than Capacity Scheduler. When hadoop framework is running 3 tasks simultaneously with the same data set (1,03 GB) at one time. Fair Scheduler needs faster waiting time than Capacity Scheduler which is 87% faster in single node and 177% faster in muliti node. Capacity Scheduler needs faster run time than Fair Scheduler which is 55% faster in single node and 212% faster in multi node. Fair Scheduler needs faster turnaround time than Capacity Scheduler which is 4% faster in single node, while in multi node Capacity Scheduler is 58% faster than Fair Scheduler. When hadoop framework is running 3 tasks simultaneously with different data set, which is data set 1 (456 MB), data set 2 (726 MB), and data set 3 (1,03 GB) in one time. In data set 3 (1,03 GB), Fair Scheduler needs faster waiting time than Capacity Scheduler which is 44% faster in single node and 1150% faster in muliti node. Capacity Scheduler needs faster run time than Fair Scheduler which is 56% faster in single node and 38% faster in multi node. Capacity Scheduler needs faster turnaround time than Fair Scheduler which is 12% faster in single node, while in multi node Fair Scheduler is 25,5% faster than Capacity Scheduler"
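The comparison rests on three scheduling metrics the abstract uses without defining: waiting time (submission until the scheduler starts the job), run time (start until completion), and turnaround time (their sum). A minimal sketch of how the metrics relate, using hypothetical timestamps rather than measurements from the thesis:

```python
from dataclasses import dataclass

@dataclass
class Job:
    submit: float  # time the job entered the queue (seconds)
    start: float   # time the scheduler began running it
    finish: float  # time it completed

def waiting_time(j: Job) -> float:
    # Time spent queued before the scheduler allocated resources.
    return j.start - j.submit

def run_time(j: Job) -> float:
    # Time spent actually executing.
    return j.finish - j.start

def turnaround_time(j: Job) -> float:
    # End-to-end latency: waiting time plus run time.
    return j.finish - j.submit

# Three simultaneous submissions, as in the second scenario above
# (illustrative numbers, not measured values from the thesis).
jobs = [Job(0, 0, 310), Job(0, 95, 420), Job(0, 180, 505)]
for i, j in enumerate(jobs, 1):
    print(f"job {i}: wait={waiting_time(j):.0f}s "
          f"run={run_time(j):.0f}s turnaround={turnaround_time(j):.0f}s")
```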
2016
T45854
UI - Tesis Membership  Universitas Indonesia Library
White, Tom
"Get ready to unlock the power of your data. With the fourth edition of this comprehensive guide, you'll learn how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. This book is ideal for programmers looking to analyze datasets of any size, and for administrators who want to set up and run Hadoop clusters.Using Hadoop 2 exclusively, author Tom White presents new chapters on YARN and several Hadoop-related projects such as Parquet, Flume, Crunch, and Spark. You'll learn about recent changes to Hadoop, and explore new case studies on Hadoop's role in healthcare systems and genomics data processing.Learn fundamental components such as MapReduce, HDFS, and YARNExplore MapReduce in depth, including steps for developing applications with itSet up and maintain a Hadoop cluster running HDFS and MapReduce on YARNLearn two data formats: Avro for data serialization and Parquet for nested dataUse data ingestion tools such as Flume (for streaming data) and Sqoop (for bulk data transfer)Understand how high-level data processing tools like Pig, Hive, Crunch, and Spark work with HadoopLearn the HBase distributed database and the ZooKeeper distributed configuration service."
Sebastopol, CA: O'Reilly Media, 2015
005.74 WHI h
Buku Teks SO  Universitas Indonesia Library
Ishmah Naqiyya
"Perkembangan teknologi informasi dan internet dalam berbagai sektor kehidupan menyebabkan terjadinya peningkatan pertumbuhan data di dunia. Pertumbuhan data yang berjumlah besar ini memunculkan istilah baru yaitu Big Data. Karakteristik yang membedakan Big Data dengan data konvensional biasa adalah bahwa Big Data memiliki karakteristik volume, velocity, variety, value, dan veracity. Kehadiran Big Data dimanfaatkan oleh berbagai pihak melalui Big Data Analytics, contohnya Pelaku Usaha untuk meningkatkan kegiatan usahanya dalam hal memberikan insight yang lebih luas dan dalam. Namun potensi yang diberikan oleh Big Data ini juga memiliki risiko penggunaan yaitu pelanggaran privasi dan data pribadi seseorang. Risiko ini tercermin dari kasus penyalahgunaan data pribadi Pengguna Facebook oleh Cambridge Analytica yang berkaitan dengan 87 juta data Pengguna. Oleh karena itu perlu diketahui ketentuan perlindungan privasi dan data pribadi di Indonesia dan yang diatur dalam General Data Protection Regulation (GDPR) dan diaplikasikan dalam Big Data Analytics, serta penyelesaian kasus Cambridge Analytica-Facebook. Penelitian ini menggunakan metode yuridis normatif yang bersumber dari studi kepustakaan. Dalam Penelitian ini ditemukan bahwa perlindungan privasi dan data pribadi di Indonesia masih bersifat parsial dan sektoral berbeda dengan GDPR yang telah mengatur secara khusus dalam satu ketentuan. Big Data Analytics juga memiliki beberapa implikasi dengan prinsip perlindungan privasi dan data pribadi yang berlaku. Indonesia disarankan untuk segera mengesahkan ketentuan perlindungan privasi dan data pribadi khusus yang sampai saat ini masih berupa rancangan undang-undang.

The development of information technology and the internet in various sectors of life has led to an increase in data growth in the world. This huge amount of data growth gave rise to a new term, Big Data. The characteristic that distinguishes Big Data from conventional data is that Big Data has the characteristic of volume, velocity, variety, value, and veracity. The presence of Big Data is utilized by various parties through Big Data Analytics, for example for Corporation to incurease their business activities in terms of providing broader and deeper insight. But this potential provided by Big Data also comes with risks, which is violation of one's privacy and personal data. One of the most scandalous case of abuse of personal data is Cambridge Analytica-Facebook relating to 87 millions user data. Therefor it is necessary to know the provisions of privacy and personal data protection in Indonesia and which are regulated in the General Data Protection (GDPR) and how it applied in Big Data Analytics, as well as the settlement of the Cambridge Analytica-Facebook case. This study uses normative juridical methods sourced from library studies. In this study, it was found that the protection of privacy and personal data in Indonesia is still partial and sectoral which is different from GDPR that has specifically regulated in one bill. Big Data Analytics also has several implications with applicable privacy and personal data protection principles. Indonesia is advised to immediately ratify the provisions on protection of privacy and personal data which is now is still in the form of a RUU."
Depok: Fakultas Hukum Universitas Indonesia, 2020
S-pdf
UI - Skripsi Membership  Universitas Indonesia Library
Febtriany
"Saat ini kompetisi di industri telekomunikasi semakin ketat. Perusahaan telekomunikasi yang dapat tetap menghasilkan banyak keuntungan yaitu perusahaan yang mampu menarik dan mempertahankan pelanggan di pasar yang sangat kompetitif dan semakin jenuh. Hal ini menyebabkan perubahan strategi banyak perusahaan telekomunikasi dari strategi 'growth '(ekspansi) menjadi 'value added services'. Oleh karena itu, program mempertahankan pelanggan ('customer retention') saat ini menjadi bagian penting dari strategi perusahaan telekomunikasi. Program tersebut diharapkan dapat menekan 'churn' 'rate 'atau tingkat perpindahan pelanggan ke layanan/produk yang disediakan oleh perusahaan kompetitor.
Program mempertahankan pelanggan ('customer retention') tersebut tentunya juga diimplementasikan oleh PT Telekomunikasi Indonesia, Tbk (Telkom) sebagai perusahaan telekomunikasi terbesar di Indonesia. Program tersebut diterapkan pada berbagai produk Telkom, salah satunya Indihome yang merupakan 'home services' berbasis 'subscriber' berupa layanan internet, telepon, dan TV interaktif. Melalui kajian ini, penulis akan menganalisa penyebab 'churn' pelanggan potensial produk Indihome tersebut, sehingga Telkom dapat meminimalisir angka 'churn' dengan melakukan program 'customer retention' melalui 'caring' yang tepat.
Mengingat ukuran 'database' pelanggan Indihome yang sangat besar, penulis akan menganalisis data pelanggan tersebut menggunakan metoda 'Big Data Analytics'. 'Big Data' merupakan salah satu metode pengelolaan data yang sangat besar dengan pemetaan dan 'processing' data. Melalui berbagai bentuk 'output', implementasi 'big data' pada perusahaan akan memberikan 'value' yang lebih baik dalam pengambilan keputusan berbasis data.

Nowadays, telecommunication industry is very competitive. Telecommunication companies that can make a lot of profit is the one who can attract and retain customers in this highly competitive and increasingly saturated market. This causes change of the strategy of telecommunication companies from growth strategy toward value added services. Therefore, customer retention program is becoming very important in telecommunication companies strategy. This program hopefully can reduce churn rate or loss of potential customers due to the shift of customers to other similar products.
Customer retention program also implemented by PT Telekomunikasi Indonesia, Tbk (Telkom) as the leading telecommunication company in Indonesia. Customer retention program implemented for many Telkom products, including Indihome, a home services based on subscriber which provide internet, phone, and interactive TV. Through this study, the authors will analyze the cause of churn potential customers Indihome product, so that Telkom can minimize the churn number by doing customer retention program through the efficient caring.
Given by huge customer database the author will analyze using Big Data analytics method. Big Data is one method in data management that contain huge data, by mapping and data processing. Through various forms of output, big data implementation on the organization will provide better value in data-based decision making.
"
Depok: Fakultas Ekonomi dan Bisnis Universitas Indonesia, 2018
T-Pdf
UI - Tesis Membership  Universitas Indonesia Library
"Informasi telah menjadi komoditas berharga yang membawa pada perubahan pada kehidupan manusia. Salah satu perubahan adalah bagaimana manusia memperoleh informasi tersebut dari kepingan data yang sangat banyak. Kepingan data yang banyak tersebut merupakan big data membutuhkan tempat untuk disimpan di organisasi dan di analisa. Perpustakaan memiliki sejarah panjang sebagai tempat penyimpanan, pengorganisasian dan analisa informasi. Artikel ini berusaha memberikan gambaran umum tentang big data dan pengaruhnya terhadap dunia perpustakaan. Big data membawa pengaruh besar dalam dunia perpustakaan khususnya pada aspek layanan perpustakaan, kompetensi pustakawan."
MPMKAP 22:4 (2015)
Artikel Jurnal  Universitas Indonesia Library
Haura Syarafa
"ABSTRAK
PT. Citilink Indonesia Citilink adalah maskapai penerbangan yang menerapkan konsep penerbangan berbiaya murah atau Low Cost Carrier LCC . Dalam menjalankan proses operasionalnya, Citilink memiliki masalah terkait efisiensi bahan bakar. Salah satu penyebab bahan bakar tidak efisien adalah tidak teridentifikasinya komponen mesin pesawat yang kinerjanya sudah tidak maksimal. Komponen pesawat yang tidak bekerja maksimal tidak dapat teridentifikasi dikarenakan data log pesawat yang terlalu banyak belum dapat diolah. Berangkat dari masalah tersebut, Citilink dituntut untuk dapat mengolah dan menganalisis data log pesawat tersebut beserta dengan data lainnya yang memiliki karakteristik Big Data Volume, Velocity, dan Variety . Namun agar penerapan Big Data berjalan dengan baik dan tepat sesuai dengan karakteristik perusahaan, maka diperlukan sebuah kerangka kerja penerapan Big Data serta memvalidasi kerangka kerja tersebut di Citilink. Metode yang dipakai untuk merancang rekomendasi kerangka kerja penerapan Big Data adalah metode penelitian Lakoju 2017 , Tianmei dan Baowen 2007 , dan Kitsios dan Kamariotou 2016 yang terdiri dari tahap rencana pemetaan, penilaian framework Big Data, perancangan framework Big Data, dan uji validasi. Hasil penelitian menunjukkan lima fase kerangka kerja penerapan Big Data Big Data Strategic Alignment, Team, Project Plan, Data Analytics, dan Implementation yang dikelilingi oleh tahap proses manajemen kinerja serta adanya keterlibatan proses manajemen perubahan. Setelah itu, kerangka kerja tersebut divalidasi kepada dua responden dari Citilink dan dilakukan enam perbaikan, yaitu penambahan cara menganalisis kondisi internal organisasi, pengkategorian posisi tim, penambahan rekomendasi data processing architecture, penambahan aktivitas pemeliharaan aplikasi Big Data, penambahan bagaimana cara menganalisis perubahan, dan penambahan aktivitas kontrol proyek.

ABSTRACT
PT. Citilink Indonesia Citilink is airline that implements Low Cost Carrier LCC . To perform its operational processes, Citilink has problems related to fuel efficiency. One of the causes inefficient fuel is unidentified damaged aircraft engine components. An unidentified damaged aircraft components because the data log plane is too much and can not be processed. From these problems Citilink also required to be able to process and analyze the log data of the aircraft and other data that has characteristics of Big Data Volume, Velocity, and Variety . But in order for Big Data implementation to run properly and appropriatley in accordance with the characteristics of the company, it would require a framework for implementation of Big Data and validate that framework in Citilink. The method used to design the recommendation of framework for implementation of Big Data is Lakoju 2017 , Tianmei and Baowen 2007 , and Kitsios and Kamariotou 2016 research method begins with the mapping plan stage, the Big Data framework assessment, the Big Data framework design, and the validation test. The results is the five phases in framework for implementation of Big Data Big Data Strategy Alignment, Team, Project Plan, Data Analytics and Implementation surrounded by performance management processes and the involvement of the change management process. After that, the framework has been validated to two respondents from Citilink and has been improvements with adding of how to analyze an organization 39;s internal conditions, categorizing position of the team, addition of a data processing architecture recommendations, additional Big Data maintenance aspects, addition of how to analyze changes, and addition of project control aspects."
2018
TA-Pdf
UI - Tugas Akhir  Universitas Indonesia Library
Nico Juanto
"E-commerce dan big data merupakan bukti dari kemajuan teknologi yang sangat pesat. Big data berperan cukup penting dalam perusahaan e-commerce untuk menangani perkembangan semua data, mengolah setiap data tersebut dan menjadi competitive advantage bagi perusahaan. Perusahaan XYZ.com mengalami kesulitan dalam menganalisis stok dan tren dari produk yang dijual. Jika hal ini tidak ditanggulangi, maka perusahaan XYZ.com akan kehilangan opportunity gain. Untuk menentukan tren dan stok produk secara cepat dengan akurat, dibutuhkan big data predictive analysis. Penelitian ini mengolah data transaksi menjadi data yang dapat dianalisis untuk menentukan tren dan prediksi tren produk berdasarkan kategorinya dengan menggunakan big data predictive analysis. Hasil dari penelitian ini akan memberikan informasi kepada pihak manajemen kategori apa yang berpotensi menjadi tren dan jumlah minimal stok yang harus disediakan dari kategori produk tersebut.

E commerce and big data are evidence of rapid technological advances. Big data plays an important role in e commerce companies to handle and analyze all data changes, and become a competitive advantage for the company. XYZ.com experience a difficulty in analyzing stocks and commerce product trend. If this issue not addressed, XYZ.com company will lose an opportunity gain. To determine trends and stock accurately, XYZ.com can use big data predictive analysis. This study processes transaction data into data that can be analyzed to determine trends and predictions of product trends based on its categories using big data predictive analysis. The results of this study give massive informations to management about what categories will potential become trends and minimum stock required to be provided."
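The abstract does not name the predictive technique used. One common baseline for projecting a category's demand trend and a minimum stock level from transaction history is a moving-average forecast plus a safety buffer, sketched below under that assumption with hypothetical sales figures (nothing here is taken from the thesis):

```python
import statistics

def forecast_demand(weekly_sales: list[int], window: int = 4) -> float:
    # Moving-average baseline: project next week's demand from recent weeks.
    return sum(weekly_sales[-window:]) / window

def minimum_stock(weekly_sales: list[int], window: int = 4,
                  safety_factor: float = 1.65) -> int:
    # Forecast plus a safety buffer proportional to demand variability.
    forecast = forecast_demand(weekly_sales, window)
    sigma = statistics.stdev(weekly_sales[-window:])
    return round(forecast + safety_factor * sigma)

sales = [120, 135, 150, 170, 190, 220]  # hypothetical weekly units for one category
print(forecast_demand(sales))  # 182.5 -- an upward trend
print(minimum_stock(sales))    # forecast plus a buffer against demand swings
```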
Depok: Fakultas Ekonomi dan Bisnis Universitas Indonesia, 2017
T-Pdf
UI - Tesis Membership  Universitas Indonesia Library
Panji Winata
"[ABSTRAK
PT. XYZ merupakan perusahaan telekomunikasi di Indonesia yang sedang
berusaha mentransformasikan bisnisnya menuju layanan broadband dan bisnis
digital. Banyak peluang bisnis di layanan broadband dan bisnis digital yang dapat
diidentifikasi dengan memproses dan menganalisis data dengan cepat, tepat, dan
menyeluruh. Saat ini PT. XYZ telah memiliki kemampuan dalam mengolah
beberapa sumber data yang terstruktur dengan ukuran data yang terbatas. Untuk
membuat perhitungan dan keputusan yang jitu, terutama di layanan broadband dan
bisnis digital, PT. XYZ dituntut juga untuk bisa memproses dan menganalisis data
yang memiliki karakteristik 3V (Velocity, Volume, Variety) atau dikenal dengan big
data. Penelitian ini bertujuan untuk merancang arsitektur sistem pemrosesan big
data di PT. XYZ. Kerangka arsitektur (framework) enteprise yang digunakan dalam
penelitian ini adalah TOGAF. Hasil yang diperoleh dari penelitian ini adalah
rancangan arsitektur sistem pemrosesan big data yang mampu mengolah data yang
memiliki karakteristik 3V, yaitu aliran data yang cepat, berukuran masiv, dan
beranekaragam (terstruktur maupun tidak terstruktur) dengan biaya lebih rendah
dari sistem pemrosesan data yang dimiliki PT. XYZ saat ini. Saran untuk penelitian
ini kedepannya adalah sistem pemrosesan big data di PT. XYZ dapat
diimplementasikan dengan baik jika mendapat dukungan penuh dari manajemen
perusahaan, dimulai dengan kasus bisnis yang spesifik (specific business case) yang
ingin disasar. Hasil yang maksimal dari kasus bisnis tersebut dapat dijadikan
landasan untuk investasi sistem pemrosesan big data yang lebih menyeluruh dalam
mendukung transformasi bisnis menuju layanan broadband dan bisnis digital.

ABSTRACT
PT. XYZ is a telecommunication company in Indonesia which is transforming it's business to broadband services & digital business. Many business opportunities in broadband services & digital business can be identified by processing and analyzing data quickly, accurately, and completely. Right now PT. XYZ has the capability in processing some structured data sources with limited data size. To make accurate calculations and decisions, especially in broadband services and digital business, PT. XYZ also required to be able to process and analyze the data that has the characteristics of 3V (Velocity, Volume, Variety) or known as big data. This research aims to design the architecture of big data processing system. The enterprise architecture framework used in this study is TOGAF. The results obtained from this study is the design of big data processing system architecture that is capable of processing data which has the characteristics of 3V (the fast data
flow, massive data size, and diverse structured or unstructured data sources) at a lower cost than the current data processing system in PT. XYZ. The suggestion about this study is the big data processing system can be implemented properly in PT. XYZ with the full support of the PT. XYZ management, started with a specific business use case that want targeted. The maximum results from the business use case can be used as a piloting for big data processing system investments more
thorough in supporting business transformation toward broadband services and digital business. ;PT. XYZ is a telecommunication company in Indonesia which is transforming it?s
business to broadband services & digital business. Many business opportunities in
broadband services & digital business can be identified by processing and analyzing
data quickly, accurately, and completely. Right now PT. XYZ has the capability in
processing some structured data sources with limited data size. To make accurate
calculations and decisions, especially in broadband services and digital business,
PT. XYZ also required to be able to process and analyze the data that has the
characteristics of 3V (Velocity, Volume, Variety) or known as big data. This
research aims to design the architecture of big data processing system. The
enterprise architecture framework used in this study is TOGAF. The results
obtained from this study is the design of big data processing system architecture
that is capable of processing data which has the characteristics of 3V (the fast data
flow, massive data size, and diverse structured or unstructured data sources) at a
lower cost than the current data processing system in PT. XYZ. The suggestion
about this study is the big data processing system can be implemented properly in
PT. XYZ with the full support of the PT. XYZ management, started with a specific
business use case that want targeted. The maximum results from the business use
case can be used as a piloting for big data processing system investments more
thorough in supporting business transformation toward broadband services and
digital business. , PT. XYZ is a telecommunication company in Indonesia which is transforming it’s
business to broadband services & digital business. Many business opportunities in
broadband services & digital business can be identified by processing and analyzing
data quickly, accurately, and completely. Right now PT. XYZ has the capability in
processing some structured data sources with limited data size. To make accurate
calculations and decisions, especially in broadband services and digital business,
PT. XYZ also required to be able to process and analyze the data that has the
characteristics of 3V (Velocity, Volume, Variety) or known as big data. This
research aims to design the architecture of big data processing system. The
enterprise architecture framework used in this study is TOGAF. The results
obtained from this study is the design of big data processing system architecture
that is capable of processing data which has the characteristics of 3V (the fast data
flow, massive data size, and diverse structured or unstructured data sources) at a
lower cost than the current data processing system in PT. XYZ. The suggestion
about this study is the big data processing system can be implemented properly in
PT. XYZ with the full support of the PT. XYZ management, started with a specific
business use case that want targeted. The maximum results from the business use
case can be used as a piloting for big data processing system investments more
thorough in supporting business transformation toward broadband services and
digital business. ]"
2015
TA-Pdf
UI - Tugas Akhir  Universitas Indonesia Library
Krishnan, Krish
Burlington: Elsevier Science, 2013
005.745 KRI d
Buku Teks SO  Universitas Indonesia Library
"This book highlights the state of the art and recent advances in Big Data clustering methods and their innovative applications in contemporary AI-driven systems. The book chapters discuss Deep Learning for Clustering, Blockchain data clustering, Cybersecurity applications such as insider threat detection, scalable distributed clustering methods for massive volumes of data; clustering Big Data Streams such as streams generated by the confluence of Internet of Things, digital and mobile health, human-robot interaction, and social networks; Spark-based Big Data clustering using Particle Swarm Optimization; and Tensor-based clustering for Web graphs, sensor streams, and social networks. The chapters in the book include a balanced coverage of big data clustering theory, methods, tools, frameworks, applications, representation, visualization, and clustering validation. "
Switzerland: Springer Nature, 2019
e20507207
eBooks  Universitas Indonesia Library