dc.contributor.author | Wanzare, Lilian | |
dc.contributor.author | Okutoyi, Joel | |
dc.contributor.author | Kang'ahi, Maurine | |
dc.contributor.author | Ayere, Mildred | |
dc.date.accessioned | 2024-11-11T15:48:04Z | |
dc.date.available | 2024-11-11T15:48:04Z | |
dc.date.issued | 2024-10-23 | |
dc.identifier.uri | https://repository.maseno.ac.ke/handle/123456789/6210 | |
dc.description.abstract | Kenyan Sign Language (KSL) is the primary language used by the deaf community in Kenya. It is the medium of instruction from Pre-primary 1 to university among deaf learners, facilitating their education and academic achievement. Kenyan Sign Language is used for social interaction, expression of needs, making requests and general communication among persons who are deaf in Kenya. However, a language barrier exists between deaf and hearing people in Kenya; the Artificial Intelligence for Kenyan Sign Language (AI4KSL) innovation is therefore key to eliminating this communication barrier. AI4KSL is a two-year research project (2023-2024) that aims to create a digital, open-access AI dataset of spontaneous and elicited data from a representative sample of the Kenyan deaf community. The purpose of this study is to develop an AI assistive-technology dataset that translates English to KSL, fostering inclusion and bridging language barriers among deaf learners in Kenya. The specific objectives are to build a KSL dataset of spoken English and video-recorded Kenyan Sign Language, and to build transcriptions of the KSL signs at a phonetic-level interface of the sign language. This paper describes the methodology for building the dataset. Data were collected from 48 teachers and tutors of deaf learners and from 400 learners who are deaf. Participants engaged mainly in sign language elicitation tasks through reading and singing. The resulting dataset consists of about 14,000 English sentences with corresponding KSL gloss, derived from a pool of about 4,000 words, and about 20,000 signed KSL videos of either signed words or sentences. The second level of data outcomes consists of 10,000 split and segmented KSL videos. The third outcome consists of 4,000 words transcribed into five articulatory parameters according to the HamNoSys system. | en_US |
dc.publisher | arXiv preprint | en_US |
dc.subject | Kenyan Sign Language, AI4KSL, inclusive education, transcription, lexical database, language contact, language change | en_US |
dc.title | Kenyan Sign Language (KSL) Dataset: Using Artificial Intelligence (AI) in Bridging Communication Barrier among the Deaf Learners | en_US |
dc.type | Article | en_US |