WebAssembly (often abbreviated as Wasm) is a binary instruction format designed for efficient execution in web browsers. It allows code written in languages like C, C++, and Rust to run in the browser at near-native speed. WebAssembly is not intended to replace JavaScript but to work alongside it, handling performance-critical tasks. For example:
// File: add.c
#include <emscripten.h>

EMSCRIPTEN_KEEPALIVE
int add(int a, int b) {
    return a + b;
}
<!-- File: index.html -->
<!DOCTYPE html>
<html>
  <body>
    <script>
      WebAssembly.instantiateStreaming(fetch('add.wasm'))
        .then(result => {
          const add = result.instance.exports.add;
          console.log(add(5, 3)); // Output: 8
        });
    </script>
  </body>
</html>
You first need to compile the C code to WebAssembly using Emscripten. The command looks something like this:
emcc add.c -s WASM=1 -s EXPORTED_FUNCTIONS='["_add"]' -o add.wasm
Service workers are scripts that run in the background, separate from a web page. They act as a proxy between the web application and the network, allowing you to intercept and modify network requests. A service worker is registered from the web application's JavaScript and, unlike the browser's built-in HTTP cache, gives you programmatic control over what gets cached and when a cached response is served.
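A minimal sketch of the cache-first strategy a service worker can implement. The function name cacheFirst and the Map-based cache are illustrative stand-ins for the browser's Cache API, so the decision logic can run outside a browser; the commented-out listener shows how it would be wired into an actual sw.js.

```javascript
// Cache-first strategy sketch. In a real service worker the Cache API and
// fetch() would be used; here a Map and a plain callback stand in so the
// logic can run anywhere.
function cacheFirst(url, cache, fetchFn) {
  // Serve from the cache when we have a hit, otherwise go to the network.
  return cache.has(url) ? cache.get(url) : fetchFn(url);
}

// In an actual service worker (sw.js), the equivalent wiring would be:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(
//     caches.match(event.request).then((hit) => hit || fetch(event.request))
//   );
// });

// Simulated usage:
const swCache = new Map([['/app.css', 'cached stylesheet']]);
const network = (url) => `network response for ${url}`;
console.log(cacheFirst('/app.css', swCache, network));   // cached stylesheet
console.log(cacheFirst('/data.json', swCache, network)); // network response for /data.json
```

The page registers the worker with `navigator.serviceWorker.register('/sw.js')`; after that, the worker's fetch handler sees every in-scope request.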
Go (Golang): Go is a language known for its concurrency model, performance, and scalability. It’s well-suited for handling real-time data processing and large-scale APIs, which could explain Superhuman’s backend speed. This language has been widely adopted for building fast, scalable systems.
Node.js: Its event-driven, non-blocking model allows handling multiple requests simultaneously, which is well suited to I/O-bound tasks such as fetching emails, syncing data, and performing background tasks.
Redis and In-Memory Databases: Superhuman needs to perform operations like email search and retrieval in milliseconds. It likely leverages Redis or other in-memory databases for quick data caching and session management. This ensures that frequently accessed data, like email metadata, is available instantly.
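To make the caching idea concrete, here is a toy LRU (least recently used) cache. This is a sketch of the concept only, not how Redis is implemented: the point is that every lookup is an in-memory hash-map operation with no disk or network hop, which is why cached email metadata can be served in microseconds.

```javascript
// Toy LRU cache: a Map's insertion order is used to track recency.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first key in insertion order).
      this.map.delete(this.map.keys().next().value);
    }
  }
}

// Hypothetical email-metadata keys, purely for illustration:
const cache = new LRUCache(2);
cache.set('msg:1', { subject: 'Welcome' });
cache.set('msg:2', { subject: 'Invoice' });
cache.get('msg:1');                          // touch msg:1, so msg:2 is now LRU
cache.set('msg:3', { subject: 'Meeting' }); // evicts msg:2
console.log(cache.get('msg:2')); // undefined
```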
Elasticsearch or Meilisearch: These scalable search engines are designed to provide fast, accurate results, crucial for users who rely on searching through thousands of emails. (Elasticsearch stores data in JSON documents within indices. An index is like a database in a relational system, and documents are the basic unit of information that can be indexed, similar to rows in a relational database. Elasticsearch uses an inverted index structure, which is excellent for full-text search: when you index a document, Elasticsearch breaks the text fields into terms, creates an index of all the unique terms, and maps each term to the documents that contain it. Elasticsearch is also distributed by nature, splitting indices into shards that can be spread across multiple nodes in a cluster, which lets it handle large amounts of data and provide high availability and fault tolerance.)
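The inverted-index idea described above can be shown in a few lines. This is a bare-bones sketch (no stemming, ranking, or sharding, all of which a real engine adds): each unique term maps to the set of document ids containing it, and a multi-term query intersects those posting lists.

```javascript
// Build a minimal inverted index: term -> set of document ids.
function buildInvertedIndex(docs) {
  const index = new Map();
  for (const [id, text] of Object.entries(docs)) {
    for (const term of text.toLowerCase().split(/\W+/).filter(Boolean)) {
      if (!index.has(term)) index.set(term, new Set());
      index.get(term).add(id);
    }
  }
  return index;
}

// Query: intersect the posting lists of each search term.
function search(index, query) {
  const lists = query.toLowerCase().split(/\W+/).filter(Boolean)
    .map((term) => index.get(term) || new Set());
  if (lists.length === 0) return [];
  return [...lists[0]].filter((id) => lists.every((l) => l.has(id)));
}

// Illustrative email subjects:
const docs = {
  1: 'Quarterly report attached',
  2: 'Re: quarterly planning meeting',
  3: 'Lunch plans',
};
const idx = buildInvertedIndex(docs);
console.log(search(idx, 'quarterly'));         // ['1', '2']
console.log(search(idx, 'quarterly meeting')); // ['2']
```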
WebSockets: These enable persistent connections between the client and the server for real-time updates without constant polling.
It likely uses Firebase Cloud Messaging (FCM) or Apple Push Notification Service (APNS) to deliver push notifications in real time when an email arrives or when there's a calendar update.
IMAP and SMTP Protocols: Like most email clients, Superhuman likely interacts with Gmail, Outlook, and other email providers using IMAP (for receiving) and SMTP (for sending) protocols. These standard email protocols facilitate communication between the client and the mail server.
Custom API Wrapping for Providers: Superhuman may also wrap its own layer around Gmail's and Microsoft Outlook's APIs to optimize for things like faster email fetching.
Smart Inbox: The smart inbox prioritizes important emails and organizes them based on user preferences and behavior. To achieve this, it uses AI and machine learning.
End-to-End Encryption: Superhuman likely uses TLS (Transport Layer Security) for encrypting data during transmission. While email itself can’t always be fully encrypted (due to standard email protocols), Superhuman likely encrypts data in transit and uses strong authentication methods to protect accounts.
OAuth 2.0: Used to authenticate users securely without needing to store their passwords. OAuth tokens allow third-party apps like Superhuman to access email without compromising security.
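The first step of the OAuth 2.0 authorization-code flow is redirecting the user to the provider's authorization endpoint. A sketch of building that URL is below; the endpoint, client id, and redirect URI are placeholder values, not real Superhuman or Google configuration.

```javascript
// Build the authorization request URL for the OAuth 2.0
// authorization-code grant.
function buildAuthUrl(endpoint, { clientId, redirectUri, scope, state }) {
  const params = new URLSearchParams({
    response_type: 'code', // ask for an authorization code, not a token
    client_id: clientId,
    redirect_uri: redirectUri,
    scope,
    state,                 // CSRF protection: checked on the redirect back
  });
  return `${endpoint}?${params.toString()}`;
}

const url = buildAuthUrl('https://accounts.example.com/o/oauth2/auth', {
  clientId: 'demo-client-id',
  redirectUri: 'https://app.example.com/callback',
  scope: 'https://mail.google.com/',
  state: 'random-opaque-value',
});
console.log(url);
```

After the user consents, the provider redirects back with a short-lived code, which the app exchanges at the token endpoint for access and refresh tokens; the tokens, not the user's password, are what get stored.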
Data Storage Compliance: Superhuman would also need to comply with various data protection regulations (e.g., GDPR, CCPA) to ensure that user data is handled appropriately and stored securely.
Electron: Used to build cross-platform desktop apps for macOS and Windows. Electron enables web technologies like HTML, CSS, and JavaScript to be used to create native-like desktop applications.
For mobile, cross-platform frameworks like React Native or Flutter are likely used.
Storage Layer: Snowflake uses the cloud provider’s storage services (e.g., AWS S3, Azure Blob Storage, or Google Cloud Storage) to store data in a compressed, optimized columnar format. Data is kept in a central, durable location, which is cost-effective and highly scalable.
Compute Layer: Compute resources (called virtual warehouses) can be spun up and down as needed, allowing users to allocate compute power to specific workloads. Each virtual warehouse can be scaled independently based on workload needs (e.g., querying, data loading, or analytics).
MX benefits from the Open Banking movement, which promotes secure data sharing through open APIs. Open Banking enables MX to securely retrieve data from banks and financial institutions that comply with these standards, allowing users to control which apps and services have access to their financial data.
Sendbird runs on a cloud-native architecture, leveraging cloud platforms such as AWS, Microsoft Azure, and Google Cloud. This allows Sendbird to dynamically scale both horizontally (adding more servers) and vertically (increasing server capacity) based on demand.
It follows a microservices architecture, where different functionalities like messaging, notifications, and user management are broken down into separate, independently scalable services. This modularity allows Sendbird to handle millions of concurrent users and real-time events.
Load Balancers: Load balancing ensures that user requests are evenly distributed to avoid server overloads and minimize latency. This is crucial for maintaining high availability and real-time responsiveness.
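Round-robin is the simplest distribution policy a load balancer can apply: hand each incoming request to the next backend in rotation. The sketch below shows only that core idea; production balancers also factor in health checks, connection counts, and latency.

```javascript
// Round-robin backend selection. Backend names are illustrative.
function roundRobin(backends) {
  let next = 0;
  return () => {
    const backend = backends[next];
    next = (next + 1) % backends.length; // rotate to the next server
    return backend;
  };
}

const pick = roundRobin(['srv-a', 'srv-b', 'srv-c']);
console.log([pick(), pick(), pick(), pick()]); // ['srv-a','srv-b','srv-c','srv-a']
```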
For voice and video, it uses WebRTC (Web Real-Time Communication), a free, open-source framework that enables real-time voice and video communication through web browsers and mobile devices without requiring third-party plugins.
It likely uses NoSQL databases like Amazon DynamoDB or MongoDB to store large amounts of user messages and channel data. NoSQL databases are optimized for scalability, offering fast read/write performance for applications with high data throughput.
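Part of why NoSQL stores scale is that data is partitioned by key: every client can compute which shard holds a key without a central lookup. The modulo scheme below is a deliberately simplified stand-in; real systems such as DynamoDB use consistent hashing so that adding nodes moves less data.

```javascript
// Hash a partition key onto one of `nodeCount` shards.
function shardFor(key, nodeCount) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
  }
  return hash % nodeCount;
}

// Deterministic: every client computes the same shard for the same key,
// so reads and writes for a channel's messages always hit the same node.
// The key format below is illustrative.
console.log(shardFor('channel:42:messages', 4));
console.log(shardFor('channel:42:messages', 4)); // same shard both times
```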
Chromium: VS Code is built on Electron, which embeds Chromium, the open-source engine behind the Chrome browser. This allows VS Code to display UI elements like menus, panels, and the text editor in a web-like environment but on the desktop.
VS Code supports the Language Server Protocol (LSP), which standardizes how IDEs and code editors interact with programming-language-specific features like code completion, syntax highlighting, and error checking. This allows VS Code to support a wide range of programming languages.
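On the wire, LSP messages are JSON-RPC payloads framed with a Content-Length header, written over the language server's stdin/stdout. The framing helper below follows that format; the file URI and position are made-up example values, but `textDocument/completion` is a real LSP method.

```javascript
// Frame a JSON-RPC message the way LSP requires:
// a Content-Length header, a blank line, then the JSON body.
function frame(message) {
  const body = JSON.stringify(message);
  return `Content-Length: ${Buffer.byteLength(body, 'utf8')}\r\n\r\n${body}`;
}

// A completion request as an editor would send it to a language server:
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'textDocument/completion',
  params: {
    textDocument: { uri: 'file:///tmp/example.ts' },
    position: { line: 4, character: 12 },
  },
};
console.log(frame(request).split('\r\n\r\n')[0]); // the Content-Length header
```

The server replies with an identically framed response carrying the same `id`, which is how requests and responses are matched over a single pipe.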
API Connectivity: Plaid’s core service revolves around its robust API, which allows apps to connect to users’ bank accounts, credit cards, and other financial services. Plaid standardizes and normalizes data across different banks and financial institutions, making it easy for developers to access transaction histories, balances, and account details.
Data Normalization: Plaid converts raw financial data from various institutions into a consistent format. This simplifies integration for developers who would otherwise need to manage multiple formats and types of financial data.
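A sketch of that normalization step: two hypothetical bank formats are mapped into one common schema. The source field names (posted_on, memo, amount_minor_units, etc.) are invented for illustration and are not Plaid's actual schema.

```javascript
// Per-institution adapters that map raw records into one common shape.
const normalizers = {
  bankA: (t) => ({
    date: t.posted_on,
    amountCents: Math.round(t.amount * 100), // dollars -> integer cents
    description: t.memo,
  }),
  bankB: (t) => ({
    date: t.transactionDate,
    amountCents: t.amount_minor_units,       // already integer cents
    description: t.narrative,
  }),
};

function normalize(source, transaction) {
  return normalizers[source](transaction);
}

console.log(normalize('bankA', { posted_on: '2024-05-01', amount: 12.5, memo: 'Coffee' }));
console.log(normalize('bankB', { transactionDate: '2024-05-01', amount_minor_units: 1250, narrative: 'Coffee' }));
// Both yield { date: '2024-05-01', amountCents: 1250, description: 'Coffee' }
```

Downstream code then deals with exactly one shape, whichever institution the data came from, which is the integration burden the text says Plaid lifts from developers.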
Multi-Factor Authentication (MFA): Plaid supports MFA to add another layer of security when users log into their financial accounts, ensuring protection against unauthorized access.
Plaid Link: Plaid provides a pre-built, customizable front-end user interface (UI) called Plaid Link that developers can easily integrate into their applications. This UI simplifies the process of connecting user accounts securely, allowing users to link their financial accounts to apps with just a few clicks.
Plaid uses machine learning to automatically categorize and enrich raw transaction data. It can recognize patterns and classify spending into categories like “groceries,” “rent,” or “entertainment,” giving users and developers deeper insights into financial behavior.
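To show the shape of that enrichment step without the machine learning, here is a rule-based stand-in: a keyword table maps merchant descriptions to categories. Plaid's real categorizer is a trained model, not a lookup table; this only illustrates the input and output.

```javascript
// Keyword rules standing in for a learned classifier. Merchant strings
// and keyword lists are illustrative.
const CATEGORY_RULES = [
  { category: 'groceries',     keywords: ['whole foods', 'safeway', 'grocery'] },
  { category: 'rent',          keywords: ['rent', 'property mgmt'] },
  { category: 'entertainment', keywords: ['netflix', 'cinema', 'spotify'] },
];

function categorize(description) {
  const text = description.toLowerCase();
  const rule = CATEGORY_RULES.find((r) => r.keywords.some((k) => text.includes(k)));
  return rule ? rule.category : 'uncategorized';
}

console.log(categorize('WHOLE FOODS MARKET #123')); // groceries
console.log(categorize('NETFLIX.COM'));             // entertainment
console.log(categorize('ATM WITHDRAWAL'));          // uncategorized
```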
Plaid Insights: The company leverages predictive analytics and machine learning to provide more detailed insights into users’ spending and saving patterns. This enriched data can be used for budgeting, credit risk assessment, or financial advice applications.
Anomaly Detection: Plaid uses algorithms to detect anomalies in transaction data, such as potential fraud or unusual activity. This helps developers build financial apps with better fraud detection mechanisms.
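The simplest version of such anomaly detection is statistical: flag transactions whose amount sits many standard deviations from the user's typical spend. Production fraud models use far richer features (merchant, location, timing), so the z-score sketch below only demonstrates the basic signal.

```javascript
// Flag amounts more than `threshold` standard deviations from the mean.
function anomalies(amounts, threshold = 3) {
  const mean = amounts.reduce((a, b) => a + b, 0) / amounts.length;
  const variance = amounts.reduce((s, x) => s + (x - mean) ** 2, 0) / amounts.length;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // all amounts identical: nothing to flag
  return amounts.filter((x) => Math.abs(x - mean) / std > threshold);
}

// Illustrative daily spend with one wildly unusual charge:
const spend = [12, 9, 15, 11, 14, 10, 13, 950];
console.log(anomalies(spend, 2)); // [950]
```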
FactSet Workstation is likely built using C++ and Microsoft's .NET Framework (probably in C#) to provide a high-performance, native desktop application. These technologies allow for efficient handling of large datasets and real-time data streams.
It likely uses WPF (Windows Presentation Foundation) for rendering its user interface. WPF is part of the .NET Framework and allows for the creation of rich, interactive UIs that can handle complex data visualizations like heat maps, charts, and grids.
Web-Based UI Components (React, JavaScript): While the core FactSet Workstation is a desktop application, FactSet has also built web-based components for integration into browser environments. React and JavaScript are commonly used to build responsive, interactive user interfaces for web-based tools within the platform.
FactSet likely uses a microservices architecture, where various services (such as data retrieval, financial modeling, or portfolio management) are developed and deployed independently. This allows for modularity and scalability across the platform.
It may use Kafka or RabbitMQ to stream financial data efficiently to users. These technologies allow for low-latency data delivery and ensure that updates reach users almost instantaneously.
It likely relies on in-memory data processing to speed up complex financial computations and data retrieval. By keeping frequently accessed data in memory, the platform minimizes latency and ensures fast response times when analyzing large datasets.
Socket Programming (WebSockets): To provide real-time updates on stock prices, market changes, and news, FactSet Workstation likely uses WebSockets or other low-latency communication protocols that allow data to be pushed to the client instantly.
It likely uses Highcharts and D3.js to create interactive and customizable charts, graphs, and financial models. These tools are essential for visualizing time series data, heat maps, and correlation matrices.
It uses SSO technologies to ensure secure access, allowing users to authenticate using their organization's identity provider.
FactSet uses distributed computing frameworks to process large datasets in parallel. This ensures that users can perform complex analyses on massive datasets (e.g., portfolio backtesting, risk simulations) without performance bottlenecks.
Collaboration is supported by technologies like real-time synchronization and cloud-based sharing, allowing for easy collaboration across global teams.
VMS was a general-purpose operating system known for its enterprise-grade reliability, robust security, and multi-user capabilities. FONIX was a proprietary, specialized OS developed by FactSet for handling real-time financial data and providing a platform for their financial analytics services.