    Why LLMs Can't Count the R's in 'Strawberry' & What It Teaches Us

    September 9, 2024

    Language models have become increasingly sophisticated, capable of generating human-quality text, translating languages, and even writing code. Yet, despite their impressive abilities, these LLMs still stumble on surprisingly simple tasks. One such challenge that has baffled even the most advanced LLMs is the seemingly trivial task of counting the number of "r"s in the word "strawberry."

     

    Imagine, for a moment, a supercomputer that can write a Shakespearean sonnet or solve complex mathematical equations. Yet, when presented with the simple task of counting the "r"s in a common English word, it stumbles and stutters, often producing incorrect results. 

     

    This seemingly absurd situation highlights a fundamental limitation of current LLMs - their inability to grasp the intricacies of human language at a granular level. While they can excel at higher-level tasks like summarization and translation, they often stumble over the finer details, such as counting individual characters.

     

    This blog focuses on understanding why this error occurs and what it teaches us about the limitations of modern-age LLMs. Let’s dive in.

     

    Understanding LLMs and Their Limitations

    LLMs have revolutionized fields like natural language processing and creative writing across nearly every niche. To understand how LLMs approach problem-solving, it's essential to look at their underlying mechanisms. Let's break down the process in simple terms. 

    The Transformer Architecture

    Most modern LLMs are built on the Transformer architecture, introduced in the paper "Attention Is All You Need" in 2017. This architecture is designed to capture long-range dependencies in sequences, making it particularly well-suited for natural language processing tasks. The key components of the Transformer Architecture include:

     

    • Encoder-Decoder Architecture 

    The Transformer consists of an encoder and a decoder. The encoder processes the input sequence, while the decoder generates the output sequence. Think about translating a sentence from English to French. The encoder would process the English sentence, understanding its meaning and structure. Then, the decoder would use this information to generate the equivalent French sentence.

     

    • Attention Mechanism 

    The attention mechanism allows the model to weigh the importance of different parts of the input sequence when generating the output. This enables the model to capture long-range dependencies and context. Consider translating the sentence "The cat sat on the mat." When translating "mat," the attention mechanism would focus on the words "cat" and "on," as they are more relevant than "the" in determining the correct translation.

     

    • Self-Attention 

    Self-attention allows the model to relate different parts of the input sequence to each other, helping it understand the context and meaning of the text by capturing the relationships between words. For example, in the sentence "The quick brown fox jumps over the lazy dog," self-attention would help the model understand that "quick" and "brown" both modify "fox," and so on.

     

    • Positional Encoding  

    Positional encoding adds information about the position of each word in the input sequence, allowing the model to capture the order of words. For instance, the meaning of the sentence "The dog bit the cat" is different from "The cat bit the dog." Positional encoding helps the model understand the importance of word order.

     

    Here are some more examples to illustrate these concepts:

     

    • Machine Translation - The encoder processes the source language sentence, while the decoder generates the target language sentence. Attention helps the model focus on relevant parts of the source sentence when translating each word in the target sentence.
    • Text Summarization - The encoder processes the input text, and the decoder generates a summary. Attention helps the model focus on the most important parts of the input text when generating the summary.
    • Question Answering - The encoder processes the question and context, and the decoder generates the answer. Attention helps the model focus on relevant parts of the context when answering the question.

     

    By understanding these components and how they work together, we can gain a deeper appreciation for the power and versatility of the Transformer architecture.
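    To make these ideas concrete, here is a minimal sketch of single-head scaled dot-product self-attention in NumPy. The sequence length, dimensions, and random weights are purely illustrative and not taken from any real model.

    ```python
    # Minimal single-head scaled dot-product self-attention (illustrative only).
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model) token embeddings (with positional encoding added)."""
        Q = X @ Wq                                   # queries
        K = X @ Wk                                   # keys
        V = X @ Wv                                   # values
        scores = Q @ K.T / np.sqrt(K.shape[-1])      # how strongly each token attends to every other
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                           # each output mixes all value vectors

    rng = np.random.default_rng(0)
    seq_len, d_model, d_k = 5, 8, 4                  # e.g. 5 tokens of a short sentence
    X = rng.normal(size=(seq_len, d_model))
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)       # -> (5, 4)
    ```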

    The Problem-Solving Process

    When an LLM is presented with a problem, it follows these general steps. 

    1. Tokenization - The input text is broken down into smaller units called tokens, which can be individual words or subwords.
    2. Embedding - Each token is represented as a numerical vector, capturing its semantic meaning.
    3. Encoding - The encoder processes the input sequence, using self-attention to capture dependencies and context.
    4. Decoding (for generative tasks) - The decoder generates the output sequence, using the encoded input and attention mechanisms to produce the desired text.
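    As a rough illustration of steps 1-3, the snippet below pushes a short sentence through a tokenizer and encoder using the Hugging Face transformers library. The choice of bert-base-uncased is just an example for this sketch, not something the process above requires.

    ```python
    # Illustrative sketch of tokenization, embedding, and encoding with transformers.
    # bert-base-uncased is an arbitrary example model for this sketch.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    text = "The quick brown fox jumps over the lazy dog"
    inputs = tokenizer(text, return_tensors="pt")                    # 1. tokenization -> token IDs
    print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))   # the tokens the model actually sees

    with torch.no_grad():
        outputs = model(**inputs)                                    # 2-3. embedding + encoding
    print(outputs.last_hidden_state.shape)                           # (1, seq_len, hidden_size)
    ```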

     

    Example - Question Answering

    To illustrate how LLMs approach problem-solving, consider the task of question answering. When presented with a question and a context, an LLM will:

    1. Tokenize - Break down the question and context into tokens.
    2. Embed - Represent each token as a numerical vector.
    3. Encode - Process the question and context using the encoder.
    4. Decode - Generate an answer by iteratively predicting each word of the answer based on the encoded input and previously generated words.
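    A hedged end-to-end sketch of this flow, using the transformers question-answering pipeline (the default model it downloads, and the toy question, are assumptions of the example rather than part of the process above):

    ```python
    # End-to-end question answering with the transformers pipeline API (illustrative).
    from transformers import pipeline

    qa = pipeline("question-answering")   # downloads a default extractive QA model

    result = qa(
        question="How many 'r' characters does the word contain?",
        context="The word strawberry is spelled s-t-r-a-w-b-e-r-r-y and contains three r characters.",
    )
    print(result["answer"], round(result["score"], 3))   # answer span extracted from the context
    ```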

     

    Now that we have a basic understanding of the mechanisms behind how LLMs work - let’s talk about the elephant in the room - ‘STRAWBERRY!’ 

     

    (Embedded timeline: the evolution of language models, from ELIZA and early chatbots to GPT-4, highlighting the key milestones that have shaped AI language capabilities.)

     

    The Case of "Strawberry"

    The word "strawberry" presents a unique challenge for LLMs because it involves multiple factors, including phonetic similarity, homophones, irregular spelling, and the limitations of current LLM architectures. 

    Tokenization and the Word "Strawberry"

    As mentioned above, tokenization is a fundamental process in natural language processing that involves breaking down text into smaller units called tokens. These tokens can be individual words, subwords, or characters, depending on the specific tokenization method used.

     

    Think of tokenization as chopping up a sentence into smaller pieces. For example, the sentence "The quick brown fox jumps over the lazy dog" could be tokenized into the following words: "The," "quick," "brown," "fox," "jumps," "over," "the," "lazy," and "dog."

    Common Tokenization Methods

    • Word-Level Tokenization - Breaks text down into whole words. For "strawberry," this would produce a single token - "strawberry" - with no visibility into the letters inside it.
    • Subword Tokenization - Breaks text down into subword units - learned word fragments, as in BPE or WordPiece - and is what most modern LLMs use. It handles out-of-vocabulary words and morphological variations well; for "strawberry," it might yield pieces such as "straw" and "berry."
    • Character-Level Tokenization - Breaks text down into individual characters. This method can be useful for tasks that require fine-grained analysis of text, but it also increases computational cost.
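    The toy snippet below contrasts these three granularities on a short sentence. The "subword" split here is a made-up fixed-length rule, not a real learned vocabulary; it is only meant to show the difference in granularity.

    ```python
    # Toy comparison of tokenization granularities (the subword rule is made up).
    sentence = "The quick brown fox"

    word_tokens = sentence.split()                      # word-level: whole words
    char_tokens = list(sentence.replace(" ", ""))       # character-level: individual letters

    subword_tokens = []                                 # fake "subword" split: 3-letter chunks
    for w in word_tokens:
        subword_tokens += [w[:3], w[3:]] if len(w) > 3 else [w]

    print(word_tokens)      # ['The', 'quick', 'brown', 'fox']
    print(subword_tokens)   # ['The', 'qui', 'ck', 'bro', 'wn', 'fox']
    print(char_tokens[:8])  # ['T', 'h', 'e', 'q', 'u', 'i', 'c', 'k']
    ```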

    Tokenization and "Strawberry"

    When tokenizing the word "strawberry," the choice of tokenization method can significantly impact how the LLM processes the word. For example:

     

    • Word-Level Tokenization - The LLM would represent "strawberry" as a single, opaque token. Because the model never sees the individual letters inside that token, counting the "r"s becomes guesswork rather than inspection.
    • Subword Tokenization - A subword tokenizer might break "strawberry" down into smaller units, such as "straw" and "berry" (the exact split depends on the tokenizer's vocabulary). This gives the LLM more granular information about the word's structure, but the "r"s are still hidden inside multi-character pieces, so the model can still miscount them.
    • Character-Level Tokenization - The LLM would represent "strawberry" as 10 individual characters: "s," "t," "r," "a," "w," "b," "e," "r," "r," and "y." This method would provide the most detailed information about the word's structure but could also be computationally expensive.
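    To see how a real subword tokenizer actually handles the word, here is a small sketch using the tiktoken library. The exact pieces depend on the vocabulary, so treat the printed split as illustrative rather than definitive - the point is that the "r"s end up buried inside multi-character tokens.

    ```python
    # Inspect how a BPE tokenizer splits "strawberry" (requires the tiktoken package).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")       # a vocabulary used by several OpenAI models
    ids = enc.encode("strawberry")
    pieces = [enc.decode([i]) for i in ids]

    print(pieces)                             # subword pieces; the exact split depends on the vocabulary
    print({p: p.count("r") for p in pieces})  # per-piece 'r' counts the model never sees as characters
    print("strawberry".count("r"))            # -> 3, trivial once you work at the character level
    ```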

     

    The Impact of Tokenization on Character Counting

    The choice of tokenization method can directly affect the accuracy of character counting. If the tokenization method obscures the relationship between individual characters, it can be difficult for the LLM to count them accurately. For example, if "strawberry" is tokenized as "straw" and "berry," the LLM may not connect the "r" in "straw" with the two "r"s in "berry," and so fail to arrive at the correct total of three.

     

    To improve character counting accuracy, LLMs may need to use tokenization methods that preserve more information about the structure of words, such as finer-grained subword tokenization or character-level tokenization. They may also need to incorporate further techniques, such as contextual understanding and linguistic knowledge, to better capture the relationship between individual characters within words.

     

    Lessons from the "Strawberry" Challenge

    The "strawberry" challenge offers valuable insights into the limitations and capabilities of LLMs. Here are some key lessons to consider:

    1. The Importance of Prompt Engineering

    LLMs sometimes struggle to understand the context of a query, especially when it involves subtle differences. This is because LLMs are trained on large datasets of text, which can introduce biases and limitations. To improve the accuracy of LLM responses, it is crucial to provide as much context as possible. For example, when asking an LLM to count the characters in a specific word, it is helpful to provide the word's definition or usage in a sentence. This underscores how much the way we phrase our prompts matters. 
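    As a purely illustrative workaround (not something prescribed by the challenge itself), one common prompt-engineering trick is to spell the word out so that every letter is exposed explicitly before asking for the count:

    ```python
    # Illustrative prompt-engineering workaround: expose each letter before asking for a count.
    word = "strawberry"

    naive_prompt = f"How many times does the letter 'r' appear in '{word}'?"

    spelled = " ".join(word)   # "s t r a w b e r r y"
    careful_prompt = (
        f"The word '{word}' spelled letter by letter is: {spelled}. "
        "Count the occurrences of the letter 'r' in that list step by step, "
        "then state the final number."
    )
    print(naive_prompt)
    print(careful_prompt)
    ```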

    2. The Limitations of Tokenization

    Tokenization can be a limiting factor in character counting. Different tokenization methods can produce different results, and some methods may obscure the relationship between individual characters within a word. To address this limitation, experts are exploring new tokenization techniques that can preserve more information about the structure of words.

    3. The Need for Character-Level Processing

    While LLMs are often designed to process language at a higher level, focusing on semantic meaning rather than individual characters, there is a growing need for character-level processing in certain tasks. Character-level processing can be particularly useful for tasks that require a deep understanding of the structure of words, such as spell-checking, text normalization, and character recognition.

    4. The Impact of Biases and Limitations

    LLMs are trained on large datasets of text, which can introduce biases and limitations. These biases can manifest in various ways, such as generating offensive or harmful content, perpetuating stereotypes, or reinforcing existing inequalities. It is important to be aware of these limitations and to interpret LLM outputs with caution.

    5. The Potential for Improvement

    Despite their limitations, LLMs have the potential to become even more powerful tools for natural language processing. By addressing the challenges highlighted in the "strawberry" challenge, researchers can develop LLMs that are more accurate, reliable, and versatile.

     

    Future Directions - Overcoming the Limitations of LLMs

    As LLMs continue to evolve, innovative techniques are being explored to address the limitations, particularly in areas such as character-level processing. Here are some promising future directions:

     

    • Enhanced Fine-Grained Analysis - Character-level attention allows models to focus on individual characters, enabling more precise text analysis.
    • Improved Accuracy for Character-Based Tasks - This technique can significantly improve the accuracy of tasks like spell-checking, text normalization, and character recognition.
    • Integration with Word-Level Processing - Character-level attention can be combined with word-level processing to create a more comprehensive understanding of text.
    • Leveraging NMT for Character-Level Tasks - Techniques from Neural Machine Translation, such as attention mechanisms and sequence-to-sequence modeling, can be adapted to improve character-level processing.
    • Enhanced Contextual Understanding - NMT models can capture long-range dependencies and contextual information, which is crucial for accurate character-level analysis.
    • Improved Handling of Irregularities - NMT can help LLMs handle irregular spellings, morphological variations, and other linguistic complexities.
    • Hybrid Approaches - Combining the strengths of several of the techniques above can address the limitations of LLMs more effectively than any single method.

     

    Below are some examples of real-world applications of these techniques.

     

    • Google Translate - Google Translate has incorporated character-level processing and hybrid NMT techniques to improve its accuracy for languages with complex writing systems, such as Chinese and Japanese.
    • OpenAI's GPT-3 - While it primarily operates on subword tokens rather than characters, GPT-3 has shown some ability to handle character-level tasks, such as spell-checking and text normalization.

     

    In The End

    The "strawberry" challenge serves as a reminder of both the limitations and potential of language models. While LLMs have made significant progress in recent years, they still struggle with tasks that require a deep understanding of individual characters and their relationships within words. However, by exploring innovative techniques such as character-level attention, neural machine translation, and hybrid approaches, it is possible to overcome these limitations.

     

    As we continue to push the boundaries of natural language processing, it is essential to remain aware of the challenges and opportunities that lie ahead. By understanding the limitations of LLMs and exploring new avenues for improvement, we can build language models that handle both the big picture and the smallest details.


      Hijab e Fatima

      I’m a technical content writer with a passion for all things AI and ML. I love diving deep into complex topics and breaking them down into digestible information. When I’m not writing, you can find me exploring anything and everything trending.
