
Search Results

Found 105 documents matching the query
New York: Academic Press, 1982
004.35 PAR
Textbook  Universitas Indonesia Library
Jesper Larsson Traff, editor
Abstract:
This book constitutes the refereed proceedings of the 19th European MPI Users' Group Meeting, EuroMPI 2012, Vienna, Austria, September 23-26, 2012. The 29 revised papers presented together with 4 invited talks and 7 poster papers were carefully reviewed and selected from 47 submissions. The papers are organized in topical sections on MPI implementation techniques and issues, benchmarking and performance analysis, programming models and new architectures, run-time support, fault-tolerance, message-passing algorithms, message-passing applications, IMUDI, improving MPI user and developer interaction.
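For readers unfamiliar with the message-passing model that these proceedings revolve around, the following is a minimal MPI point-to-point example in C. It is illustrative only and not drawn from the proceedings.

```c
/* Minimal MPI point-to-point example: rank 1 sends an integer to rank 0. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 1) {
            int value = 42;
            MPI_Send(&value, 1, MPI_INT, 0, /* tag */ 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", value);
        }
    }
    MPI_Finalize();
    return 0;
}
```

Compile with an MPI wrapper compiler such as mpicc and launch with mpirun -np 2.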
Berlin: Springer-Verlag, 2012
e20407883
eBooks  Universitas Indonesia Library
Eko Sediyono
Abstract:
Sorting is an important process, widely used in report generation to produce ordered, easy-to-read data; it also serves as a tool for executing more complex algorithms.

The need for faster processing of data and information is increasingly felt, and even a fast processor is sometimes not enough. To meet this need, an implementation on a parallel computer was carried out.

This thesis studies the implementation of parallel external sorting on a computer network using the PVM (Parallel Virtual Machine) software. The advantage of an implementation on PVM is that no dedicated parallel hardware needs to be procured, because PVM can exploit an existing heterogeneous computer network as a parallel computer system. The network used consists of five SUN SPARC Station 1+ workstations connected via TCP/IP over Ethernet in a bus topology.

The impact of slow message passing on this network was reduced by tuning the size of the packets sent. The serialization of the I/O path was addressed by giving each processor its own disk, reducing shared access to a single disk. With these improvements, the maximum speedup obtained with a five-processor configuration on data sets of 5,000 to 25,000 records is 3.6, at an efficiency of 71.12%. The data are record-structured, with three alphanumeric fields and 50 bytes per record.
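As a reading aid (not part of the abstract), the reported figures are consistent with the standard definitions of speedup and efficiency:

```latex
% Standard definitions (assumed here; the abstract does not restate them):
%   speedup on p processors:   S_p = T_1 / T_p
%   efficiency:                E_p = S_p / p
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}
% With p = 5 and the reported E_5 = 71.12%, S_5 = 5 * 0.7112 = 3.556,
% which matches the quoted maximum speedup of 3.6 after rounding.
```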
1994
T-Pdf
UI - Thesis Membership  Universitas Indonesia Library
Gianinna Ardaneswari
Abstract:
In bioinformatics, sequence database searches are used to find the similarity between a sequence and the other sequences in a database. One of the algorithms for computing the optimal similarity score is the Smith-Waterman algorithm, which uses dynamic programming. This algorithm has quadratic time complexity, O(n²), so large data sets require long computation times. Parallel computing is therefore needed to make these database searches faster and better performing. This mini thesis discusses a parallel implementation of the Smith-Waterman algorithm in the CUDA C programming language on a GPU, compiled with the NVCC compiler on Linux. A performance analysis is then carried out for several parallelization models: inter-task parallelization, intra-task parallelization, and a combination of the two. Based on the simulation results, the combined model performs better than the other models, achieving an average speed-up of 313x and an average efficiency of 0.93.
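For orientation, the Smith-Waterman recurrence that the thesis parallelizes (across database sequences for inter-task parallelization, and within a single alignment for intra-task parallelization) can be sketched sequentially in C. The scoring parameters below are illustrative assumptions, and this is a plain CPU reference, not the CUDA C implementation from the thesis.

```c
/* Sequential Smith-Waterman local-alignment score (reference sketch).
 * Match/mismatch/gap scores are illustrative assumptions. */
#include <stdlib.h>
#include <string.h>

#define MATCH     2
#define MISMATCH -1
#define GAP      -2

static int max4(int a, int b, int c, int d) {
    int m = a;
    if (b > m) m = b;
    if (c > m) m = c;
    if (d > m) m = d;
    return m;
}

int smith_waterman(const char *a, const char *b) {
    size_t n = strlen(a), m = strlen(b);
    int *prev = calloc(m + 1, sizeof(int));   /* row i-1 of the DP matrix */
    int *curr = calloc(m + 1, sizeof(int));   /* row i   of the DP matrix */
    int best = 0;

    for (size_t i = 1; i <= n; i++) {
        for (size_t j = 1; j <= m; j++) {
            int s = (a[i - 1] == b[j - 1]) ? MATCH : MISMATCH;
            curr[j] = max4(0,                   /* local alignment: never below 0 */
                           prev[j - 1] + s,     /* align a[i-1] with b[j-1] */
                           prev[j] + GAP,       /* gap in b */
                           curr[j - 1] + GAP);  /* gap in a */
            if (curr[j] > best) best = curr[j];
        }
        int *tmp = prev; prev = curr; curr = tmp;
        memset(curr, 0, (m + 1) * sizeof(int));
    }
    free(prev);
    free(curr);
    return best;   /* optimal local-alignment score */
}
```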
Depok: Fakultas Matematika dan Ilmu Pengetahuan Alam Universitas Indonesia, 2013
S52395
UI - Undergraduate Thesis Membership  Universitas Indonesia Library
Abstract:
This refereed volume arose from the editors' recognition that physical scientists, engineers, and applied mathematicians are developing, in parallel, solutions to problems of parallelization. The cross-disciplinary field of scientific computation is bringing about better communication between heterogeneous computational groups, as they face this common challenge. This volume is one attempt to provide cross-disciplinary communication. Problem decomposition and the use of domain-based parallelism in computational science and engineering was the subject addressed at a workshop held at the University of Minnesota Supercomputer Institute in April 1994. The authors were subsequently able to address the relationships between their individual applications and independently developed approaches. This book is written for an interdisciplinary audience and concentrates on transferable algorithmic techniques, rather than the scientific results themselves. Cross-disciplinary editing was employed to identify jargon that needed further explanation and to ensure provision of a brief scientific background for each chapter at a tutorial level so that the physical significance of the variables is clear and correspondences between fields are visible.
Philadelphia : Society for Industrial and Applied Mathematics, 1995
e20442880
eBooks  Universitas Indonesia Library
Abstract:
Scientific computing has often been called the third approach to scientific discovery, emerging as a peer to experimentation and theory. Historically, the synergy between experimentation and theory has been well understood: experiments give insight into possible theories, theories inspire experiments, experiments reinforce or invalidate theories, and so on. As scientific computing has evolved to produce results that meet or exceed the quality of experimental and theoretical results, it has become indispensable. Parallel processing has been an enabling technology in scientific computing for more than 20 years. This book is the first in-depth discussion of parallel computing in 10 years; it reflects the mix of topics that mathematicians, computer scientists, and computational scientists focus on to make parallel processing effective for scientific problems. Presently, the impact of parallel processing on scientific computing varies greatly across disciplines, but it plays a vital role in most problem domains and is absolutely essential in many of them. Parallel Processing for Scientific Computing is divided into four parts: The first concerns performance modeling, analysis, and optimization; the second focuses on parallel algorithms and software for an array of problems common to many modeling and simulation applications; the third emphasizes tools and environments that can ease and enhance the process of application development; and the fourth provides a sampling of applications that require parallel computing for scaling to solve larger and realistic models that can advance science and engineering.
Philadelphia: Society for Industrial and Applied Mathematics, 2006
e20443179
eBooks  Universitas Indonesia Library
Heru Suhartanto
Abstract:
Many models of natural phenomena, engineering applications, and industrial problems require high-performance computing resources to process data into the information that is needed. High-performance computing technology has been introduced by many researchers in the form of supercomputers, together with their operating systems and development tools such as compilers and libraries. However, the cost of acquiring and maintaining such resources is prohibitive for many parties, so a cheaper alternative that still delivers high performance is needed. To address this limitation, researchers have introduced parallel computing on existing computer networks, where each machine in the network acts as a processor, much like a processor in a supercomputer, and many tools have been created for developing applications on such systems. This paper (A Study on Parallel Computation Tools on Networked PCs) examines the tools currently most popular among users: Parallel Virtual Machine (PVM), Message Passing Interface (MPI), Java Remote Method Invocation (RMI), and Java Common Object Request Broker Architecture (CORBA). It presents experiments intended to help prospective users choose among them. The experiments were run on a cluster of personal computers and show significant speedup; each of the four tools is identified as suitable for particular implementation and programming purposes.
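To give a flavour of the message-passing style shared by PVM and MPI, here is a minimal PVM 3 master sketch in C. The worker executable name and the message tag are assumptions made for illustration; this is not code from the paper.

```c
/* Minimal PVM 3 master: spawn NWORKER copies of a worker executable and
 * receive one integer result from each. The worker program name "worker"
 * and message tag 1 are illustrative assumptions. */
#include <stdio.h>
#include <pvm3.h>

#define NWORKER 4

int main(void) {
    int tids[NWORKER];
    int spawned = pvm_spawn("worker", NULL, PvmTaskDefault, "", NWORKER, tids);

    for (int i = 0; i < spawned; i++) {
        int result;
        pvm_recv(-1, 1);              /* receive from any task, message tag 1 */
        pvm_upkint(&result, 1, 1);    /* unpack one integer from the buffer */
        printf("got %d from a worker\n", result);
    }

    pvm_exit();                        /* leave the virtual machine */
    return 0;
}
```

A matching worker would obtain the master's task id with pvm_parent(), pack its result with pvm_initsend(PvmDataDefault) and pvm_pkint(), send it back with pvm_send(), and then call pvm_exit().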
Depok: Lembaga Penelitian Universitas Indonesia, 2006
AJ-Pdf
Journal Article  Universitas Indonesia Library
Gallivan, K.A.
Abstract:
Describes a selection of important parallel algorithms for matrix computations. Reviews the current status and provides an overall perspective of parallel algorithms for solving problems arising in the major areas of numerical linear algebra, including (1) direct solution of dense, structured, or sparse linear systems, (2) dense or structured least squares computations, (3) dense or structured eigenvalue and singular value computations, and (4) rapid elliptic solvers. The book emphasizes computational primitives whose efficient execution on parallel and vector computers is essential to obtain high-performance algorithms. It consists of two comprehensive survey papers on important parallel algorithms for solving problems arising in the major areas of numerical linear algebra (direct solution of linear systems, least squares computations, eigenvalue and singular value computations, and rapid elliptic solvers), plus an extensive, up-to-date bibliography (2,000 items) on related research.
Philadelphia : Society for Industrial and Applied Mathematics, 1990
e20442937
eBooks  Universitas Indonesia Library
Lita Analistya Dipodiputro
Abstract:
This mini thesis discusses parallel-import trade in general, both worldwide and in Indonesia. The discussion is important because parallel-import practices have recently become widespread in the absence of specific regulation governing them. The discussion focuses on parallel imports in connection with intellectual property law. In this regard, there is a case between PT. MODERN PHOTO TBK and PT. INTERNATIONAL PHOTOGRAPHIC SUPPLIES/PD. STAR PHOTOGRAPHIC SUPPLIES involving the parallel import of a specific good, namely rolls of FUJI-brand film. The typology of this mini thesis is case research, using a statutory approach and a comparative approach based on two court decisions.
Depok: Universitas Indonesia, 2009
S24738
UI - Undergraduate Thesis (Open)  Universitas Indonesia Library
Jamil Abdulhamid Mohammed Saif
Abstract:
Image edge detection plays a crucial role in image analysis and computer vision; it is defined as the process of finding the boundaries between objects within the image under consideration. The detected edges may then be used for object recognition or image matching. In this paper a Canny edge detector is used, which gives acceptable results that can be utilized in many disciplines, but the technique is time-consuming, especially when a large collection of images is analyzed. For that reason, a parallel platform is used to speed up the computation and improve the performance of the algorithms. The scalability of a multicore supercomputer node, which is used to run the same routines on collections of 2,100 to 42,000 color images, is investigated.
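The per-image workload described above is independent across images, so it distributes naturally over the cores of a node. The sketch below shows one way to express that in C with OpenMP; canny_edges() is a hypothetical stub, and OpenMP itself is an assumption, since the paper does not state which parallel programming interface was used.

```c
/* Sketch: distribute an independent per-image workload across cores with
 * OpenMP. canny_edges() is a hypothetical stub standing in for the real
 * detector; OpenMP is an assumed mechanism, not taken from the paper. */
#include <stdio.h>
#include <omp.h>

/* Stub for the per-image routine (read image, run Canny, write edge map). */
static int canny_edges(int image_id) {
    (void)image_id;          /* real work would happen here */
    return 0;                /* 0 = success */
}

int main(void) {
    const int n_images = 2100;   /* smallest collection size from the paper */
    int failures = 0;

    /* Each image is processed independently, so a dynamic-schedule
     * parallel for spreads the batch across the available cores. */
    #pragma omp parallel for schedule(dynamic) reduction(+:failures)
    for (int i = 0; i < n_images; i++) {
        if (canny_edges(i) != 0)
            failures++;
    }

    printf("processed %d images, %d failures, %d threads available\n",
           n_images, failures, omp_get_max_threads());
    return 0;
}
```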
TASK, 2017
600 SBAG 21:4 (2017)
Journal Article  Universitas Indonesia Library