
Search Results

Found 172,968 documents matching the query
Jurafsky, Dan
Upper Saddle River, N.J.: Pearson Education, 2009
410.285 JUR s
Buku Teks SO  Universitas Indonesia Library
Vincent Sanjaya
"This study develops a chatbot system for motorcycle-workshop customer service using the cosine similarity algorithm. Cosine similarity computes the similarity of two vectors from the angle between them and is used here to measure the similarity of texts. The chatbot receives a text message as input, converts it into a vector with the Bag of Words method over the existing dataset, and uses that dataset to select a reply. The cosine similarity calculation achieves an accuracy of 82.7%, and the factors affecting accuracy for each user are examined. The system uses a dataset of 472 motorcycle catalogue entries to perform the matching and prediction described above."
Depok: Fakultas Teknik Universitas Indonesia, 2020
S-pdf
UI - Skripsi Membership  Universitas Indonesia Library
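The matching step the abstract describes, Bag of Words vectorisation followed by cosine similarity against a catalogue of candidate replies, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the toy catalogue entries and function names are invented for the example:

```python
from collections import Counter
from math import sqrt

def bow_vector(text, vocabulary):
    """Count how often each vocabulary word occurs in the text (Bag of Words)."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Toy "catalogue" standing in for the thesis dataset (entries are illustrative).
catalogue = [
    "harga servis rutin motor matic",
    "jadwal ganti oli mesin motor",
    "harga ban motor tubeless",
]
vocabulary = sorted({w for entry in catalogue for w in entry.split()})

def best_match(query):
    """Return the catalogue entry whose BoW vector is most similar to the query."""
    q = bow_vector(query, vocabulary)
    return max(catalogue, key=lambda e: cosine_similarity(q, bow_vector(e, vocabulary)))

print(best_match("berapa harga ganti oli motor"))  # → jadwal ganti oli mesin motor
```

Words outside the catalogue vocabulary (here, "berapa") simply drop out of the query vector, which is a known limitation of a plain Bag of Words representation.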
Arief Saferman
"During the COVID-19 pandemic, Automatic Speech Recognition (ASR) became a widely used computer feature for transcribing online classes in real time: each recognized utterance is written to the terminal as it is spoken. In this study, the Wav2Letter ASR model is built on a CNN (Convolutional Neural Network) with two loss functions, CTC (Connectionist Temporal Classification) and ASG (Auto Segmentation Criterion). Various acoustic-model and language-model hyperparameters (batch normalization, learning rate, window type, window size, n-gram order, and language-model content) are varied to measure their effect on the model's performance. Wav2Letter performs best with the ASG criterion, a learning rate of 9 × 10⁻⁵, a window size of 0.1, a Blackman window, and a 6-gram language model. On WER, CTC is slightly better (40.36% versus 42.11%), but ASG needs only half as many training epochs (12 versus 24) and yields a model file half the size (427.8 MB versus 855.2 MB). In the final test, the Wav2Letter model with the ASG loss function achieves the best result, a WER of 29.30%; overall, ASG therefore outperforms CTC."
Depok: Fakultas Teknik Universitas Indonesia, 2022
S-Pdf
UI - Skripsi Membership  Universitas Indonesia Library
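The WER figures quoted in the abstract are the standard word error rate: the word-level edit distance between the reference and the hypothesis transcript, divided by the reference length. A minimal sketch (the example sentences are illustrative, not from the thesis data):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution ("start" for "starts") over five reference words.
print(word_error_rate("the class starts at nine", "the class start at nine"))  # → 0.2
```

A lower WER is better; the 29.30% reported above means roughly three word errors per ten reference words.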
Feka Angge Pramita
"Floortime was applied to a 3.9-year-old boy with delayed speech and language development, that is, speech-language development lagging behind that of children his age. Floortime focuses on improving the child's ability to initiate and to sustain two-way interaction with his mother. The programme ran for one month, consisting of four pre-test sessions, eighteen intervention sessions, and two post-test sessions to observe changes in mother-child interaction. After the intervention, the child initiated more often and could engage in reciprocal interaction with his mother; the mother understood the child better and gave him more opportunities to express himself. Suggestions include spending more time on Floortime each day, applying its basic principles outside the sessions, and involving other family members in Floortime sessions."
Depok: Fakultas Psikologi Universitas Indonesia, 2009
T34098
UI - Tesis Open  Universitas Indonesia Library
Coleman, John R.
New York: Cambridge University Press, 2005
410.285 COL i
Buku Teks SO  Universitas Indonesia Library
Krulee, Gilbert K.
Englewood Cliffs, NJ: Prentice-Hall, 1991
004.3 KRU c
Buku Teks SO  Universitas Indonesia Library
Boca Raton: CRC Press, 2010
006.35 HAN
Buku Teks SO  Universitas Indonesia Library
Mary, Leena
"Extraction and Representation of Prosodic Features for Speech Processing Applications treats prosody from a speech-processing point of view, covering the significance of prosody for speech processing applications, why prosody needs to be incorporated into them, and methods for extracting and representing prosody in applications such as speech synthesis, speaker recognition, language recognition, and speech recognition."
New York: Springer, 2012
e20418411
eBooks  Universitas Indonesia Library
Petrov, Slav
"This book develops a general coarse-to-fine framework for learning and inference in large statistical models for natural language processing.
Coarse-to-fine approaches exploit a sequence of models that introduce complexity gradually. At the top of the sequence is a trivial model in which learning and inference are both cheap. Each subsequent model refines the previous one until a final, full-complexity model is reached. Applications of this framework to syntactic parsing, speech recognition, and machine translation are presented, demonstrating the effectiveness of the approach in terms of accuracy and speed."
Berlin: Springer, 2012
e20399710
eBooks  Universitas Indonesia Library