
Search Results

Found 1825 documents matching the query
Adam Liwo
Gdansk: TASK, 2018
600 SBAG 22:4 (2018)
Journal Article  Universitas Indonesia Library
Yosua Krisnando Bagaskara
"Live blog merupakan suatu media dari situs berita yang menyajikan update-update tentang suatu topik yang sedang terjadi. Update-update tersebut disampaikan secara rutin setiap hari. Suatu artikel live blog bisa berhalaman-halaman. Metode live blog summarization sudah pernah dilakukan, dan menghasilkan summary yang layak. Namun, metode tersebut belum layak dijadikan aplikasi berskala besar. Ini dikarenakan metode tersebut dirancang untuk sistem tersentralisasi, sehingga tidak ada distribusi terhadap data-data tersebut. Ini berpengaruh pada performa sistem. Penelitian ini bertujuan untuk mengembangkan konsep summarization platform yang terdistribusi bernama ReBlogSum, mulai dari merancang arsitektur terdistribusi, sampai tes dan simulasi pada arsitektur tersebut dibandingkan dengan arsitektur tersentralisasi. Penelitian ini diharapkan dapat memberikan gambaran mengenai ReBlogSum dan sistem terdistribusi pada proses live blog summarization.

Live blog is a kind of media from a news site which serves updates about an ongoing topic. Those updates are delivered routinely every day. A live blog article can have many pages. The live blog summarization method has been provided, an produced a decent summary. However, the method is not feasible to be made a large-scale application. It is because the method was designed to run on centralized system. Therefore, it lacks the distribution towards the data. It affects the system performance. This research aims to develop the novel distributed summarization platform called ReBlogSum, beginning from designing the distributed architecture, until the testing and simulation to the system compared to the centralized system. This research is expected to give the insight about ReBlogSum and distributed system onto the live blog summarization process."
Depok: Fakultas Ilmu Komputer Universitas Indonesia, 2022
S-pdf
UI - Undergraduate Thesis (Skripsi) Membership  Universitas Indonesia Library
Siegel, Howard Jay
New York: McGraw-Hill, 1992
001.64 SIE i
Textbook SO  Universitas Indonesia Library
Robertus Hudi
"Improvement in this experiment are done for 3 following factors: running time, memory efficiency, and speedup. The speedup result achieved is as close as 100× increase. Naïve parallelization is used on mapping each matrices data to CUDA memories, for each major operation is done in parallel behavior via self-made CUDA kernels to suits the data dimensions. This make up the improvement of 2nd factor, which is memory efficiency. Results for kernels are captured with NVIDIA profiling tools for the increasing number of random targets on 4 transmitter-receiver (PV) combinations (without any knowledge about the approximation of targets direction). All results are taken according to the average running time of kernel calls and speed up for each size of the input, compared with serial and CPU parallel version data of the previous work. Among advanced techniques for the passive radar system’s target association, several experiments have been done based on Probability Hypothetic Density (PHD) function. The complex calculation makes the computation processes a very demanding task to be done, thus, this paper is focused on PHD function performance comparison between preceding attempts to the implementation using a pure C programming language with CUDA library. A further improvement is highly possible within algorithm optimization itself or applying more advanced parallelization technique.

Peningkatan yang dilakukan pada eksperimen ini meliputi 3 faktor: running time, memory efficiency, dan speedup. Hasil pengujian speedup yang diperoleh mencapai setidaknya 100x peningkatan daripada algoritma semula. Paralelisasi naif yang digunakan untuk memetakan setiap matriks data ke dalam memori CUDA, untuk setiap operasi major dilakukan secara paralel dengan CUDA kernel yang didesain mandiri sehingga dapat menyesuaikan secara otomatis dengan dimensi data yang digunakan. Hal ini memungkinkan peningkatan pada faktor yang kedua yaitu memory efficiency. Hasil dari masing-masing kernel diukur menggunakan data yang diambil dari NVIDIA profiling tools untuk data acak yang meningkat dari segi ukuran, dan diimplementasikan pada 4 kombinasi transmitter-reveiver (PV) tanpa mengetahui aproksimasi arah target. Seluruh hasil pengujian kernel diambil berdasarkan rata-rata running time dari pemanggilan kernel dan speed up dari setiap ukuran masukan, dibandingkan dengan implementasi asosiasi target secara serial dan versi paralel pada CPU dari penelitian terdahulu. Diantara teknik tingkat lanjut yang digunakan untuk menentukan asosiasi target pada sistem radar pasif, beberapa percobaan telah dilakukan berdasarkan fungsi Probability Hypothetic Density (PHD). Kalkulasi yang kompleks menghasilkan proses komputasi yang terlalu berat untuk dilakukan, maka dari itu, percobaan ini fokus kepada komparasi performa fungsi PHD antara penelitian-penelitian terdahulu dengan impleentasi fungsi tersebut pada pustaka CUDA menggunakan bahasa pemrograman C. Peningkatan lebih lanjut sangat dimungkinkan melalui optimisasi algoritma PHD sendiri atau menggunakan teknik paralelisasi yang lebih baik.
"
Depok: Fakultas Ilmu Komputer Universitas Indonesia, 2020
T-pdf
UI - Master's Thesis (Tesis) Membership  Universitas Indonesia Library
"This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map with the main paths already explored and new ways towards the future."
Berlin: Springer-Verlag, 2012
e20398649
eBooks  Universitas Indonesia Library
Holger Brunst, editor
"The proceedings of the 5th International Workshop on Parallel Tools for High Performance Computing provide an overview on supportive software tools and environments in the fields of system management, parallel debugging and performance analysis. In the pursuit to maintain exponential growth for the performance of high performance computers the HPC community is currently targeting exascale systems. The initial planning for exascale already started when the first petaflop system was delivered. Many challenges need to be addressed to reach the necessary performance. Scalability, energy efficiency and fault-tolerance need to be increased by orders of magnitude. The goal can only be achieved when advanced hardware is combined with a suitable software stack. In fact, the importance of software is rapidly growing. As a result, many international projects focus on the necessary software.
"
Berlin: Springer, 2012
e20406453
eBooks  Universitas Indonesia Library
Uchiyama, Kunio
"This book defines the heterogeneous multicore architecture and explains in detail several embedded processor cores including CPU cores and special-purpose processor cores that achieve highly arithmetic-level parallelism. The authors developed three multicore chips (called RP-1, RP-2, and RP-X) according to the defined architecture with the introduced processor cores. The chip implementations, software environments, and applications running on the chips are also explained in the book.
Provides readers an overview and practical discussion of heterogeneous multicore technologies from both a hardware and software point of view. Discusses a new, high-performance and energy efficient approach to designing SoCs for digitally converged, embedded systems. Covers hardware issues such as architecture and chip implementation, as well as software issues such as compilers, operating systems, and application programs. Describes three chips developed according to the defined heterogeneous multicore architecture, including chip implementations, software environments, and working applications.
"
New York: Springer, 2012
e20425837
eBooks  Universitas Indonesia Library
Sage, Andrew P.
New York: McGraw-Hill, 1977
620.7 SAG m (1)
Textbook  Universitas Indonesia Library
"This book presents a language integrated query framework for big data. The continuous, rapid growth of data information to volumes of up to terabytes (1,024 gigabytes) or petabytes (1,048,576 gigabytes) means that the need for a system to manage and query information from large scale data sources is becoming more urgent. Currently available frameworks and methodologies are limited in terms of efficiency and querying compatibility between data sources due to the differences in information storage structures. For this research, the authors designed and programmed a framework based on the fundamentals of language integrated query to query existing data sources without the process of data restructuring. A web portal for the framework was also built to enable users to query protein data from the Protein Data Bank (PDB) and implement it on Microsoft Azure, a cloud computing environment known for its reliability, vast computing resources and cost-effectiveness."
Switzerland: Springer Nature, 2019
e20509153
eBooks  Universitas Indonesia Library
Mahmoud, Magdi S.
Oxford: Pergamon Press, 1981
003 MAH l (1);003 MAH l (2)
Textbook  Universitas Indonesia Library