Liquid AI has introduced its first series of Liquid Foundation Models (LFMs), a new class of generative AI designed to combine high performance with efficiency. These models aim to push the boundaries of what AI can achieve at every scale, delivering top-tier results with smaller memory requirements and faster processing times.
What Makes LFMs Different?
Built from first principles, LFMs take inspiration from engineering disciplines like dynamical systems, signal processing, and numerical linear algebra. This design approach allows them to offer more control and better efficiency than traditional models. LFMs are versatile enough to handle different types of sequential data, whether it’s video, audio, text, time series, or signals.
Liquid AI’s focus is not just on large-scale models but on building adaptable systems that can maintain performance across different environments. Their name, “Liquid,” reflects the dynamic and adaptive nature of these models.
Key Models in the LFM Series
1. LFM-1B (1.3B parameters): Designed for resource-constrained environments with minimal memory usage.
2. LFM-3B (3.1B parameters): Optimized for edge deployment, allowing it to run efficiently on local devices.
3. LFM-40B (40.3B parameters, 12B active): A Mixture of Experts (MoE) model built to handle complex tasks with a highly efficient architecture that balances workload dynamically.
These models aim to demonstrate that performance isn’t just about scale but also about innovation in design.
Performance and Availability
Liquid AI reports that the LFM-1B model has set a new benchmark in its category, outperforming other 1B models evaluated using Eleuther AI’s lm-evaluation-harness v0.4. This marks a significant achievement, as it’s the first time a non-GPT architecture has surpassed transformer-based models at this scale.
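For readers who want to reproduce this kind of comparison on publicly downloadable models, the sketch below shows how lm-evaluation-harness v0.4 is typically driven from Python. The LFM weights themselves are not open-sourced, so the Hugging Face model ID used here is only an illustrative stand-in, and the few-shot setting is simplified to a single value.

```python
# Minimal sketch: scoring an open 1B-class model with Eleuther AI's
# lm-evaluation-harness v0.4 (pip install lm-eval). The model ID below is an
# example stand-in, since the LFM weights are not publicly downloadable, and
# num_fewshot is applied uniformly here for simplicity.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=meta-llama/Llama-3.2-1B,dtype=bfloat16",
    tasks=["mmlu", "hellaswag", "arc_challenge", "gsm8k"],
    num_fewshot=5,
    batch_size=8,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```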
The LFMs are now available on several platforms:
Liquid Playground
Lambda (Chat UI and API)
Perplexity Labs
Additionally, they will soon be accessible through Cerebras Inference. The models are being optimized for various hardware, including NVIDIA, AMD, Qualcomm, Cerebras, and Apple.
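As a purely hypothetical illustration of what programmatic access could look like: if one of these platforms exposes an OpenAI-compatible chat endpoint for LFMs, a request might resemble the sketch below. The base URL, model name, and API key are placeholders, not documented values, so consult the hosting provider's own documentation.

```python
# Hypothetical sketch only: calling a hosted LFM through an OpenAI-compatible
# chat endpoint. BASE_URL and MODEL are placeholders, not documented values --
# check the hosting provider's docs for the real endpoint and model IDs.
from openai import OpenAI

BASE_URL = "https://api.example-provider.com/v1"  # placeholder endpoint
MODEL = "lfm-3b"                                  # placeholder model identifier

client = OpenAI(base_url=BASE_URL, api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Summarize the key points of this report: ..."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```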
Benchmarks

| Model | Context length (tokens) | MMLU (5-shot) | MMLU-Pro (5-shot) | Hellaswag (10-shot) | ARC-C (25-shot) | GSM8K (5-shot) |
|---|---|---|---|---|---|---|
| LFM-1B Preview 1.3B | 32k | 58.55 | 30.65 | 67.28 | 54.95 | 55.34 |
| OpenELM (Apple) 1.1B | 1k | 25.65 | 11.19 | 71.8 | 41.64 | 0.38 |
| Llama 3.2 (Meta) 1.2B | 128k | 45.46 | 19.41 | 59.72 | 41.3 | 33.36 |
| Phi 1.5 (Microsoft) 1.4B | 2k | 42.26 | 16.80 | 64.03 | 53.75 | 31.61 |
| Stable LM 2 (Stability) 1.6B | 4k | 41.06 | 16.73 | 69.33 | 44.11 | 41.55 |
| RWKV 6 (RWKV) 1.6B | 1k | 26.02 | 11.61 | 61.46 | 36.95 | 5.76 |
| Smol LM (Hugging Face) 1.7B | 2k | 28.46 | 10.94 | 62.52 | 45.48 | 0.38 |
| Danube 2 (H2O) 1.8B | 8k | 37.63 | 14.00 | 73.99 | 43.77 | 31.92 |
| Rene (Cartesia) Base 1.3B | - | 32.61 | 12.27 | 69.93 | 38.91 | 2.58 |
| R Gemma 2 (Google) Base 2.7B | 256k | 34.38 | 11.78 | 72.24 | 46.76 | 17.74 |
The LFM-3B model offers impressive performance for its size, setting a new benchmark among models with 3 billion parameters, including transformers, hybrids, and RNNs. It even surpasses many older models in the 7B and 13B parameter range. Additionally, LFM-3B performs at a comparable level to Phi-3.5-mini across several benchmarks, despite being 18.4% smaller. This makes it a top choice for mobile and edge text-based applications, where efficiency and performance are both crucial.
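The size comparison can be checked directly from the parameter counts quoted in the table below (3.1B for LFM-3B vs. 3.8B for Phi-3.5):

```python
# Quick check of the "18.4% smaller" claim from the quoted parameter counts.
lfm_3b_params = 3.1e9   # LFM-3B Preview
phi_35_params = 3.8e9   # Phi-3.5-mini
reduction = (phi_35_params - lfm_3b_params) / phi_35_params
print(f"LFM-3B is {reduction:.1%} smaller")  # -> 18.4% smaller
```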
| Model | Context length (tokens) | MMLU (5-shot) | MMLU-Pro (5-shot) | Hellaswag (10-shot) | ARC-C (25-shot) | GSM8K (5-shot) |
|---|---|---|---|---|---|---|
| LFM-3B Preview 3.1B | 32k | 66.16 | 38.41 | 78.48 | 63.99 | 70.28 |
| Gemma 2 (Google) 2.6B | 8k | 56.96 | 27.32 | 71.31 | 57.94 | 44.28 |
| Zamba 2 (Zyphra) 2.7B | - | 56* | - | 76* | 56* | - |
| AFM Edge (Apple) 3B | 32k | 60.64* | - | 55.24* | 45.39* | - |
| Llama 3.2 (Meta) 3.2B | 128k | 59.65 | 30.07 | 73.36 | 52.65 | 64.9 |
| Phi-3.5 (Microsoft) 3.8B | 128k | 68.91 | 38.31 | 78.84 | 64.51 | 79.15 |
| Mistral-7b v0.3 (Mistral AI) 7B | 4k | 62.04 | 30.35 | 84.62 | 64.16 | 49.05 |
| Llama 3.1 (Meta) 8B | 128k | 67.92 | 37.72 | 80.00 | 60.58 | 75.44 |
| Mistral Nemo (Mistral AI) 12.2B | 128k | 68.47 | 35.56 | 84.31 | 65.70 | 73.54 |
*Scores reported by the model developers. All other scores were computed with the same evaluation harness Liquid AI used for its own models.
The LFM-40B model strikes an optimal balance between size and output quality. It utilizes 12B active parameters during operation, delivering performance on par with larger models. Thanks to its Mixture of Experts (MoE) architecture, LFM-40B achieves higher throughput while remaining compatible with more affordable hardware, making it both powerful and cost-efficient for deployment.
| Model | Context length (tokens) | MMLU (5-shot) | MMLU-Pro (5-shot) | Hellaswag (10-shot) | ARC-C (25-shot) | GSM8K (5-shot) |
|---|---|---|---|---|---|---|
| LFM-40B Preview 40B (12B active) | 32k | 78.76 | 55.63 | 82.07 | 67.24 | 76.04 |
| Jamba 1.5 (AI21) 52B (12B active) | 256k | 59.57 | 28.69 | 77.16 | 60.90 | 46.47 |
| Mixtral (Mistral) 47B (13B active) | 8k | 73.42 | 38.12 | 87.54 | 71.33 | 64.22 |
| Qwen 2 (Alibaba) 57B (14B active) | 32k | 75.75 | 47.47 | 85.96 | 66.89 | 77.79 |
| Gemma 2 (Google) 27B | 128k | 76.20 | 45.69 | 85.79 | 74.83 | 84.53 |
| Yi 1.5 (01.AI) 34B | 32k | 76.19 | 45.13 | 85.37 | 69.11 | 79.68 |
| AFM Server (Apple) | 32k | 75.3* | - | 86.9* | 69.7* | 72.4* |
| Llama 3.1 (Meta) 70B | 128k | 82.25 | 52.89 | 86.40 | 70.39 | 88.10 |
*Scores reported by the model developers. All other scores were computed with the same evaluation harness Liquid AI used for its own models.
LFMs Prioritize Memory Efficiency
LFMs are designed to be more memory-efficient than traditional transformer models, especially when handling long inputs. In transformer-based LLMs, the KV cache expands linearly with the sequence length, consuming more memory. LFMs, however, use optimized input compression techniques, allowing them to process longer sequences without requiring additional hardware. This efficiency gives LFMs an edge over other models in the 3B parameter range by maintaining a smaller memory footprint, even for complex tasks.
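To make the memory argument concrete, here is a back-of-the-envelope sketch of how a transformer's KV cache grows with sequence length. The layer count, head count, and head dimension are illustrative assumptions for a generic 3B-class transformer, not the specifications of any model discussed above.

```python
# Back-of-the-envelope KV-cache growth for a generic transformer decoder.
# All architecture numbers here are illustrative assumptions, not the specs
# of LFM-3B or any other model discussed in this post.
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128, bytes_per_value=2):
    # Keys and values (hence the factor of 2) are cached per layer,
    # per KV head, per position.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_value

for tokens in (1_000, 8_000, 32_000, 128_000):
    gib = kv_cache_bytes(tokens) / 2**30
    print(f"{tokens:>7} tokens -> {gib:5.2f} GiB of KV cache")
# The cache grows linearly (~0.12 GiB at 1k vs ~15.6 GiB at 128k tokens),
# which is exactly the growth a compressed, fixed-size state avoids.
```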
Image Source: Liquid. (Total inference memory footprint of different language models vs. the input+generation length).
LFMs Make the Most of Their Context Length
With the 32k-token context window introduced in this preview release, LFMs set a new standard for models of their size, performing well even on long inputs. According to the RULER benchmark (Hsieh et al., 2024), a context length only counts as effective if the model scores above 85.6 at that length, a threshold LFMs clear comfortably. The table below shows how LFMs compare with other models at various context lengths, and the short sketch after the table shows how the effective length is read off from those scores.
| Model | Claimed length | Effective length | RULER @ 4k | RULER @ 8k | RULER @ 16k | RULER @ 32k | RULER @ 64k |
|---|---|---|---|---|---|---|---|
| Gemma 2 2B (Google) | 8k | 4k | 88.5 | 0.60 | - | - | - |
| Llama 3.2 3B (Meta) | 128k | 4k | 88.7 | 82.4 | 78.3 | 74.1 | - |
| Phi-3.5 3.8B (Microsoft) | 128k | 32k | 94.3 | 91.7 | 90.9 | 87.3 | 78.0 |
| Llama 3.1 8B (Meta) | 128k | 32k | 95.5 | 93.8 | 91.6 | 87.4 | 84.7 |
| LFM-3B | 32k | 32k | 94.4 | 93.5 | 91.8 | 89.5 | - |
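As a small illustration of how the "effective length" column is derived, the sketch below picks the longest tested context whose RULER score stays above the 85.6 threshold, using the LFM-3B and Llama 3.2 3B scores copied from the table:

```python
# How the "effective length" column is derived: the longest tested context
# whose RULER score is still above the 85.6 threshold. Scores are copied
# from the LFM-3B and Llama 3.2 3B rows of the table above.
THRESHOLD = 85.6

ruler_scores = {
    "LFM-3B":       {4_000: 94.4, 8_000: 93.5, 16_000: 91.8, 32_000: 89.5},
    "Llama 3.2 3B": {4_000: 88.7, 8_000: 82.4, 16_000: 78.3, 32_000: 74.1},
}

def effective_length(scores, threshold=THRESHOLD):
    passing = [length for length, score in scores.items() if score > threshold]
    return max(passing) if passing else None

for model, scores in ruler_scores.items():
    print(f"{model}: effective length = {effective_length(scores)} tokens")
# LFM-3B -> 32000 (every tested length passes); Llama 3.2 3B -> 4000
```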
The efficient context window of LFMs makes it possible to perform long-context tasks on edge devices for the first time. This advancement opens doors for developers to build new applications, such as document analysis, text summarization, and more meaningful interactions with context-aware chatbots. It also enhances Retrieval-Augmented Generation (RAG) by improving the model's ability to handle extended input.
Looking ahead, the focus is on scaling LFMs further across model size, compute time, and context length. In addition to language-based LFMs, Liquid AI is working on models tailored for different data types, domains, and applications, with more releases planned in the coming months.
Advancing the Pareto Frontier of Large AI Models
Liquid AI has made significant strides in optimizing its pre-training and post-training processes, along with the infrastructure supporting its models. This effort focuses on five essential criteria to ensure the models excel:
1. Knowledge Capacity
This aspect highlights the range and depth of information that the models can handle across various domains and tasks, regardless of their size. Liquid AI achieves this by utilizing a diverse pre-training dataset and implementing advanced model architectures. They have also introduced new strategies for pre-training, mid-training, and post-training, allowing LFMs to compete effectively with larger models in knowledge-based tasks.
2. Multi-Step Reasoning
This means being able to break down a problem and think logically about it. Liquid AI has emphasized this type of thinking during important training stages, which helps improve the analytical abilities of their smaller model architectures. This way, even compact models can perform strong reasoning tasks effectively.
3. Long Context Recall
The maximum input size of a model isn't the same as how effectively it can recall information from that input. Liquid AI specifically trained LFMs to enhance their recall performance and in-context learning abilities throughout the entire range of input. This means they can better remember and use information from longer inputs.
4. Inference Efficiency
Transformer-based models often use a lot of memory when handling long inputs, making them less suitable for use on edge devices. In contrast, LFMs maintain almost constant inference time and memory usage. This means that as the input length increases, it doesn't significantly slow down generation speed or require much more memory.
5. Training Efficiency
Training GPT-style models typically requires a lot of computing power. LFMs, however, are designed to train efficiently on long-context data, making the process more manageable and less resource-intensive.
Rethinking Model Architecture for Enhanced Performance
Drawing on extensive research into building capable and efficient learning systems, Liquid AI has developed a new design framework for foundation models that accounts for different modalities and hardware requirements. The aim is to go beyond traditional Generative Pre-trained Transformers in developing these models.
With LFMs, Liquid AI is putting into action new principles and methods that their team has developed over the past few months to guide the design of these models.
LFMs Use Structured Operators
LFMs are made up of structured computational units, the core building blocks of the architecture within this new design framework. These Liquid systems, and the way they are composed, increase knowledge capacity and reasoning ability. They also make training more efficient, reduce memory costs during inference, and improve performance across different types of data, such as video, audio, text, time series, and signals.
LFMs Architecture: A Controlled Approach
The design of LFMs plays a key role in shaping their scaling, inference, alignment, and analysis strategies. This allows for a detailed examination of LFMs’ dynamics using established signal processing techniques, providing insights into their behavior from the outputs they generate to their internal operations.
LFMs Are Adaptive: A Foundation for AI at Every Scale
LFMs can automatically adjust their architectures to suit specific platforms, such as Apple, Qualcomm, Cerebras, and AMD. This adaptability allows them to meet various parameter requirements and optimize inference cache size, making them versatile for AI applications at any scale.
Defining the Liquid Design Space
Liquid's design space is shaped by the featurization and footprint of architectures and their core operators. Featurization involves transforming input data, such as text, audio, images, and video, into a structured set of features or vectors. These features help modulate computations within the model in an adaptive way. For instance, audio and time series data typically require less featurization due to their lower information density compared to language and multimodal data.
Another crucial aspect is the computational complexity of the operators. By effectively navigating the design space of structured adaptive operators, Liquid AI can improve performance while keeping computational requirements under control.
At their foundation, LFMs consist of computational units that act as adaptive linear operators, with their actions shaped by the inputs they receive. The LFM design framework integrates various existing computational units in deep learning, offering a systematic method for exploring different architectures. Liquid AI's analysis focuses on three areas of model building (a toy sketch of an input-modulated operator follows the list):
Token-mixing structure – how the operator mixes embeddings within the input sequence.
Channel-mixing structure – how it combines different channel dimensions.
Featurization – which modulates computation based on the input context.
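To give a feel for what an input-modulated, adaptive linear operator means in practice, the toy NumPy sketch below applies a linear map whose per-channel gain is gated by features computed from the input itself. This is only an illustration of the general idea; it is not Liquid AI's actual architecture, which has not been open-sourced.

```python
# Toy illustration of an input-modulated (adaptive) linear operator in NumPy.
# This is NOT Liquid AI's architecture -- it only illustrates the general idea
# that the operator's action is shaped by features derived from the input.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

W = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)       # channel-mixing weights
W_feat = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)  # featurizer weights

def adaptive_linear(x):
    """Linear map whose per-channel gain depends on the input itself."""
    # Featurization: squash input-derived features into a (0, 1) gate.
    gate = 1.0 / (1.0 + np.exp(-(x @ W_feat)))
    # Channel mixing, modulated element-wise by the input-dependent gate.
    return gate * (x @ W)

x = rng.standard_normal((8, d_model))  # a sequence of 8 token embeddings
y = adaptive_linear(x)
print(y.shape)                         # (8, 64)
```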
Image Source: Liquid.
Invitation to Early Adoption of LFMs
Liquid AI is currently in the early stages of developing its Liquid Foundation Models (LFMs) and invites collaboration to explore the strengths and weaknesses of these innovative systems.
Strengths of Language LFMs:
General and Expert Knowledge: LFMs demonstrate a robust understanding of various topics, catering to both general inquiries and specialized knowledge.
Mathematics and Logical Reasoning: These models are equipped to tackle mathematical problems and logical reasoning tasks effectively.
Efficient Long-Context Processing: LFMs excel in handling tasks that require processing long contexts efficiently.
Multilingual Capabilities: While primarily designed for English, they also support multiple languages, including Spanish, French, German, Chinese, Arabic, Japanese, and Korean.
Liquid AI is committed to an open-science approach, contributing to the advancement of the AI field by publishing findings and methodologies through scientific and technical reports. They plan to release relevant data and models generated from their research to the broader AI community. However, due to the significant time and resources invested in developing these architectures, they are not currently open-sourcing their models, allowing continued progress and a competitive edge in the AI landscape.
For enterprises seeking to explore the latest advancements in AI, engagement with Liquid AI is encouraged. The company is in the early stages of its journey, actively innovating in foundation model development and deployment. They welcome feedback and insights from enthusiastic users, inviting participation in efforts to enhance the capabilities of their models.
I have nearly five years of experience in content and digital marketing, and I am focusing on expanding my expertise in product management. I have experience working with a Silicon Valley SaaS company, and I’m currently at Arbisoft, where I’m excited to learn and grow in my professional journey.