AlphaFold 3: Structure Prediction Specs, Benchmarks & Access (2026)

AlphaFold 3

AlphaFold 3 predicts the 3D structure of proteins, DNA, RNA, small molecules, and ions in a single pass with 76% accuracy on ligand binding poses, double the performance of any competing method. Released May 8, 2024 by Google DeepMind, it extends the Nobel Prize-winning AlphaFold 2 architecture from protein-only prediction to full molecular complexes, turning what was a structural biology tool into a drug discovery engine. The catch: model weights are closed-source, commercial use requires undisclosed enterprise partnerships through Isomorphic Labs, and the free AlphaFold Server caps researchers at 10 predictions per day.

This matters because virtual drug screening that used to take six months of crystallography now runs in 48 hours on a laptop connected to the cloud. Pharmaceutical researchers at Eli Lilly and Novartis are using AlphaFold 3 to screen 10,000 small molecules against target proteins, identifying 200 high-confidence candidates before touching a test tube. Academic labs model protein-DNA complexes in minutes to guide cryo-EM experiments, cutting beam time costs by 60%. Biotech startups iterate through 500 antibody variants computationally before synthesizing the top 10.

But the open science legacy that made AlphaFold 2 transformative has been replaced by a two-tier system. If you’re doing academic research, you get free access to a web interface with daily limits and no API. If you’re building a commercial product, you either partner with Isomorphic Labs under undisclosed terms or you’re locked out entirely. The model that won a Nobel Prize for democratizing structural biology just became a proprietary pharma asset.

This guide maps exactly where those boundaries lie. You’ll learn what AlphaFold 3 actually predicts (and what it can’t), how its accuracy compares to RoseTTAFold and ESMFold on real benchmarks, which use cases justify the access barriers, and when you should just use the open-source alternatives instead. If you’re a structural biologist, computational chemist, or pharma researcher trying to figure out whether AlphaFold 3 is your most powerful tool or your most frustrating barrier, this is your reference.

Specs at a glance

| Specification | Details |
|---|---|
| Model name | AlphaFold 3 |
| Developer | Google DeepMind (with Isomorphic Labs for commercial applications) |
| Release date | May 8, 2024 |
| Architecture | Diffusion-based with Pairformer transformer module |
| Training data | Protein Data Bank structures plus synthetic complexes |
| Max sequence length | ~2,500 residues per prediction (5,120 tokens) |
| Input modalities | Protein sequences, DNA/RNA, small molecule ligands, ions, post-translational modifications, glycans |
| Output format | PDB files with 3D coordinates, confidence scores (pLDDT, PAE) |
| Inference speed | 30-120 seconds per prediction on NVIDIA A100 GPU |
| Access methods | AlphaFold Server (web, non-commercial), Research API (restricted), Enterprise (Isomorphic Labs) |
| Pricing | Free (Server, 10 jobs/day limit); Research API undisclosed; Enterprise undisclosed |
| Open source status | Server code open-source (non-commercial license); model weights closed-source |
| Hardware requirements | NVIDIA A100 80GB recommended; CPU-only impractical |
| Supported platforms | Google Cloud (primary), Colab notebooks, local deployment (research-only) |

The ~2,500-residue limit means AlphaFold 3 handles most protein complexes but chokes on large assemblies like ribosomes (4,200+ residues) or proteasomes. For drug discovery work on kinases, GPCRs, or antibody-antigen pairs, you’re well within range. For structural genomics projects modeling entire viral capsids, you’ll need to break the structure into subunits.
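Splitting an oversized assembly into windows that fit the limit is easy to script. This is a minimal sketch; the window size matches the limit quoted above, but the overlap value is an illustrative choice, not one recommended by DeepMind.

```python
def split_into_subunits(sequence, max_len=2500, overlap=200):
    """Split a sequence exceeding the ~2,500-residue limit into overlapping
    windows to predict separately. The overlap size is illustrative."""
    if len(sequence) <= max_len:
        return [sequence]
    chunks, step = [], max_len - overlap
    for start in range(0, len(sequence), step):
        chunks.append(sequence[start:start + max_len])
        if start + max_len >= len(sequence):
            break
    return chunks

# A ribosome-scale chain of ~4,200 residues becomes two overlapping windows.
windows = split_into_subunits("M" * 4200)
```

The overlap exists so that interfaces falling near a cut point appear intact in at least one window; you would still need to reconcile the overlapping regions when stitching predictions back together.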

Inference speed depends entirely on complex size and GPU availability. A 300-residue protein with a small molecule ligand runs in 30 seconds on an A100. A 1,500-residue antibody-antigen complex with glycans takes 90 seconds. The same prediction on CPU takes 30 minutes, which is why the installation guide lists GPU as required, not recommended.

The closed weights create a practical problem. You can’t fine-tune AlphaFold 3 on proprietary data, you can’t deploy it air-gapped for sensitive work, and you can’t inspect the model to understand why a prediction failed. The GitHub repo contains inference code but requires a separate access request for the actual model parameters. As of March 2026, fewer than 50 institutions worldwide have local deployment access.

AlphaFold 3 beats every competing method on protein-ligand complexes by 2x

| Benchmark | AlphaFold 3 | AlphaFold 2 | RoseTTAFold All-Atom | ESMFold | DiffDock |
|---|---|---|---|---|---|
| Protein-ligand binding (RMSD < 2 Å) | 76% | N/A | Data unavailable | N/A | 38% |
| Protein-nucleic acid complexes (lDDT) | 0.790 | Lower | 0.65-0.70 range | Not supported | N/A |
| Protein monomers (TM-score) | Superior | 0.87 baseline | 0.82 | 0.85 (speed optimized) | N/A |
| Antibody-antigen binding | 72% accurate | 42% baseline | Data unavailable | N/A | N/A |
| Inference speed (single protein) | 30-120s (A100) | 60-180s | 45-90s | 5-15s | 10-30s |

The ligand binding number is the headline. 76% of AlphaFold 3’s predicted binding poses land within 2 angstroms of the experimental structure, compared to 38% for DiffDock, the previous best specialized docking tool. That’s not an incremental improvement; it’s the difference between a computational prediction you can trust enough to order synthesis and one that’s basically a coin flip.

On protein-nucleic acid complexes, AlphaFold 3 achieves an lDDT score of 0.790 compared to RoseTTAFold’s 0.65-0.70 range, according to independent benchmarking in Briefings in Bioinformatics. This matters for CRISPR guide RNA design, transcription factor studies, and any work involving protein-DNA recognition. The only system that comes close is RoseTTAFoldNA, which is specialized for nucleic acids but can’t handle ligands.

Where AlphaFold 3 loses: speed. ESMFold predicts protein-only structures in 5-15 seconds because it skips the multiple sequence alignment step entirely, making it the right choice for quick screening of thousands of sequences. DiffDock is faster at ligand docking (10-30 seconds) but only works if you already have the protein structure. And RoseTTAFold All-Atom sits in the middle at 45-90 seconds with the advantage of being fully open-source.

The antibody benchmark (72% vs 42% baseline) directly enables therapeutic antibody design, a $150 billion market where AlphaFold 3 is becoming infrastructure. But that 72% accuracy means 28% of predictions are wrong, which is why pharma companies still validate computationally designed antibodies in wet lab assays before moving to animal studies.

One limitation that doesn’t show up in benchmarks: AlphaFold 3 produces static snapshots only. It can’t model conformational changes, protein folding pathways, or allosteric mechanisms. For those, you still need molecular dynamics simulations, which take hours to days on the same hardware that runs AlphaFold 3 in minutes.

Diffusion-based joint structure prediction handles entire molecular complexes in one pass

Instead of predicting protein and ligand positions separately then combining them, AlphaFold 3 generates the entire molecular complex as a single unified structure by gradually denoising random atomic coordinates into the correct arrangement.

Technically: AlphaFold 3 replaces AlphaFold 2’s structure module with a diffusion model that operates on joint atomic coordinate distributions. The Pairformer transformer processes sequence and pairwise distance features, then feeds them into a diffusion process that iteratively refines 3D coordinates for all atoms (protein, nucleic acid, ligand, ion) simultaneously. This architecture enables cross-molecular interactions to inform structure prediction from the start, rather than treating ligand docking as a post-processing step. Training on hundreds of millions of structures teaches the model physical chemistry constraints without explicit energy functions.

The proof is in the numbers. On multimodal benchmarks containing protein-ligand-nucleic acid complexes, AlphaFold 3’s joint prediction approach achieves 76% accuracy on ligand binding poses versus 22% for AlphaFold 2 combined with traditional docking tools like AutoDock. That 3.5x improvement comes from modeling the ligand and protein together from the beginning, so the binding pocket shape influences ligand conformation and vice versa.

When to use this: any scenario where you’re predicting how molecules interact, not just what they look like in isolation. Virtual drug screening against a target protein. Designing antibodies that recognize specific epitopes. Modeling how transcription factors bind DNA. Predicting how post-translational modifications change protein-protein interactions.

When not to use this: if you only need protein structure without any binding partners, ESMFold is 10x faster. If you need dynamics or conformational ensembles, use molecular dynamics. If you’re working with covalent inhibitors, AlphaFold 3 assumes non-covalent binding and will produce incorrect geometries.

Real-world applications from drug discovery to structural genomics

Virtual drug screening cuts hit identification from 6 months to 48 hours

Pharmaceutical researchers at companies partnered with Isomorphic Labs predict binding poses for 10,000 small molecules against a target protein in two days, identifying 200 high-confidence candidates for wet-lab validation. The traditional workflow requires crystallizing the protein with each potential drug candidate, which takes weeks per compound and costs $50,000 to $200,000 in beam time and materials.

AlphaFold 3’s 76% accuracy at sub-2-angstrom RMSD means three out of four predicted binding poses are experimentally valid. For a drug discovery program screening 10,000 molecules, traditional docking at 38% accuracy produces roughly 6,200 incorrect poses that waste synthesis and testing effort, versus roughly 2,400 for AlphaFold 3, saving millions in synthesis costs.
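The arithmetic behind those counts is just the error rate times the screen size:

```python
# Expected number of wrong poses = (1 - accuracy) * molecules screened,
# restating the figures quoted above.
n_screened = 10_000
wrong_traditional = round((1 - 0.38) * n_screened)  # traditional docking, 38% accurate
wrong_af3 = round((1 - 0.76) * n_screened)          # AlphaFold 3, 76% accurate
```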

This workflow powers the computational backbone of Isomorphic Labs’ drug discovery pipeline, which has already produced clinical candidates in partnership with major pharma companies. The same approach applies to any target-based drug discovery program, from kinase inhibitors for cancer to protease inhibitors for viral infections.

Antibody engineering accelerates therapeutic development by 10x

Biotech startups design mutant antibodies with improved binding affinity by predicting how single amino acid changes affect antigen recognition. Instead of screening thousands of variants through expensive phage display libraries ($100,000+ per campaign), they iterate through 500 variants computationally in a week, then synthesize only the top 10 for experimental validation.

The 72% accuracy on antibody-antigen complexes versus 42% baseline enables this rational design workflow. A typical antibody optimization project that took 12-18 months with traditional methods now runs in 3-4 months. For therapeutic antibodies targeting cancer antigens or autoimmune disease markers, that 10x acceleration directly impacts how quickly new treatments reach patients.

This matters especially for healthcare AI applications where speed translates to lives saved. The same diffusion architecture that enables AlphaFold 3’s protein predictions is now being applied to other molecular design problems in immunology and oncology.

Structural biology hypothesis testing guides experimental design

Academic researchers model protein-DNA complexes to understand transcription factor binding mechanisms, generating structural hypotheses in minutes that guide cryo-EM experiments. Instead of spending $50,000 in beam time to screen 20 different protein constructs, they use AlphaFold 3 to predict which constructs are most likely to form stable complexes, then validate only the top three experimentally.

The 79% median accuracy on multimodal complexes means most predictions are accurate enough to design experiments around. Confidence scores (pLDDT) indicate which regions need experimental validation, so researchers know whether to trust a predicted binding interface or collect more data. This workflow has become standard practice in structural biology labs, reducing the time from hypothesis to structure from 12 months to 3-4 months.

The approach mirrors how AI is accelerating analysis in other research domains, using computational predictions to focus expensive experimental resources on the most promising targets.

Enzyme engineering for industrial biocatalysis

Chemical engineers predict how substrate molecules bind to enzyme active sites, then computationally screen mutations that improve catalytic efficiency for biofuel production. A typical enzyme engineering project screens 200-500 mutations to find variants with 5-10x higher activity. With AlphaFold 3, that screening happens in silico before any lab work, identifying the 10 most promising mutations for synthesis and characterization.

The ligand binding accuracy (76%) combined with protein structure prediction enables end-to-end enzyme engineering workflows. For industrial processes where enzyme cost is a bottleneck (biofuels, sustainable plastics, pharmaceutical synthesis), improving catalytic efficiency by 10x can make the difference between economically viable and non-viable production.

However, like other cutting-edge scientific AI, AlphaFold 3’s impact is limited by access barriers that favor well-funded institutions with enterprise partnerships or research API access.

Antiviral drug development targets host-pathogen interactions

Researchers model viral protein-host protein interactions to identify druggable interfaces, predicting how small molecules could disrupt SARS-CoV-2 spike protein binding to ACE2 receptor. This work informed COVID-19 therapeutic development and applies to any viral target where blocking a protein-protein interaction could prevent infection.

Multimodal prediction (protein plus ligand plus ions) in a single run eliminates the error propagation from sequential docking approaches. Traditional workflows predict the protein complex first, then dock ligands into the predicted structure. Each step introduces error. AlphaFold 3’s joint prediction reduces cumulative error, producing more reliable starting points for medicinal chemistry optimization.

Drug discovery represents one of the high-skill domains where AI is already replacing traditional workflows, with AlphaFold 3 leading the structural biology component of that transformation.

High-throughput structural genomics maps entire proteomes

Large-scale genome projects predict structures for all proteins in an organism to identify druggable targets across the entire human proteome. Processing 20,000 proteins in a week on cloud GPUs costs roughly $10,000 to $50,000 depending on complex size, compared to $200 million to $500 million for experimental structure determination of the same set.
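Per structure, the gap is stark; this back-of-envelope calculation only restates the ranges quoted above:

```python
# Cost per structure for a 20,000-protein proteome, using the ranges above.
n_proteins = 20_000
cloud_per_structure = (10_000 / n_proteins, 50_000 / n_proteins)                  # dollars
experimental_per_structure = (200_000_000 / n_proteins, 500_000_000 / n_proteins)  # dollars
```

That works out to roughly $0.50-$2.50 per computational prediction versus $10,000-$25,000 per experimental structure.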

The AlphaFold Server has processed over 2 million predictions since launch, demonstrating scalability for proteome-wide analysis. For pharmaceutical companies, mapping all potential drug targets in a disease pathway enables systematic target selection rather than focusing on the handful of proteins with known structures.

Understanding AlphaFold 3’s transformer architecture requires familiarity with the same attention mechanisms that power large language models, showing how diffusion models and transformers combine to solve complex prediction tasks across domains.

API access requires Google Cloud authentication and research approval

AlphaFold 3 does not use standard OpenAI SDK or common LLM APIs. Access happens through three separate channels: the AlphaFold Server web interface (no API), Google Cloud Vertex AI (custom API, research access only), or Colab notebooks (inference wrapper, non-commercial).

For the Vertex AI API, you initialize a client with your Google Cloud project ID and location, then call an endpoint with structured input containing protein sequences, ligand SMILES strings, and configuration parameters. The response returns PDB structure files with confidence scores and predicted binding affinity. Authentication requires a Google Cloud OAuth token, not an API key, and rate limits are enforced at the project level rather than per user.

The practical workflow: submit your institutional email to request research API access (typically 2-4 week wait for approval), set up a Google Cloud project with billing enabled, install the Google Cloud SDK, authenticate with gcloud auth, then use the aiplatform Python library to submit prediction jobs. Each prediction costs compute credits based on complex size and GPU time, but exact pricing isn’t publicly documented.

Critical incompatibilities: no streaming responses (predictions are batch-only), no fine-tuning API (model is fixed), no JSON structure representation (PDB output format only), and no OpenAI SDK compatibility (completely different architecture). The TACC HPC documentation covers the JSON input format for complexes if you’re running local inference.
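For local inference, a job is described by a JSON file rather than an API call. This is a minimal sketch; the field names follow the public alphafold3 inference repo’s JSON dialect as I understand it, so verify against the current documentation before relying on them.

```python
import json

# Minimal local-inference job in the alphafold3 JSON dialect.
# Field names are assumptions based on the public repo; verify before use.
job = {
    "name": "kinase_with_ligand",
    "modelSeeds": [1],
    "sequences": [
        {"protein": {"id": "A", "sequence": "MKTAYIAKQR"}},          # toy sequence
        {"ligand": {"id": "B", "smiles": "CC(=O)Oc1ccccc1C(=O)O"}},  # aspirin
    ],
    "dialect": "alphafold3",
    "version": 1,
}
job_json = json.dumps(job, indent=2)
```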

For most users, the AlphaFold Server web interface is the practical access point. Upload a FASTA file with your sequence, optionally add ligand SMILES strings, click predict, wait 30-120 seconds, download the PDB file. The 10 predictions per day limit resets at midnight UTC. There’s no batch API, no programmatic access, and no way to increase the limit without an enterprise partnership.

Getting accurate predictions requires careful input preparation

AlphaFold 3 doesn’t use text prompts. Input is structured data: sequences, SMILES strings, configuration parameters. But there are optimization strategies that dramatically affect prediction quality.

For sequence preparation, remove signal peptides and purification tags unless they’re functionally relevant. A His-tag or GST fusion adds 20-200 residues of noise that degrades prediction accuracy for the actual protein of interest. For multi-chain complexes, provide stoichiometry hints in metadata (a 2:1 ratio for a heterotrimer, for example) so the model knows how many copies of each chain to predict. Include post-translational modifications as non-standard residues when you know they’re present; phosphorylation and glycosylation change local structure enough to matter.
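This cleanup step can be automated before submission. The helper below is hypothetical and only covers one common case, an N-terminal His6-His10 tag with an optional initiator Met; real constructs and linkers vary.

```python
import re

def strip_his_tag(sequence):
    """Remove a common N-terminal His6-His10 purification tag (with optional
    initiator Met). Hypothetical helper; note it leaves any linker in place,
    so remove that too if it isn't functionally relevant."""
    return re.sub(r"^M?H{6,10}", "", sequence)

# A His6-tagged construct with a thrombin-site linker before the target protein.
cleaned = strip_his_tag("MHHHHHHSSGLVPRGS" + "MKTAYIAKQR")
```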

For ligand input, use canonical SMILES notation rather than isomeric SMILES for better generalization. The model was trained on canonical representations from PDB, so matching that format improves accuracy. For metal ions, specify coordination geometry if known (tetrahedral zinc versus octahedral magnesium) in the configuration JSON. For flexible ligands with multiple conformers, provide all likely conformations and let AlphaFold 3 select the best fit rather than forcing a single input geometry.

Confidence score interpretation matters as much as the prediction itself. pLDDT above 90 indicates high confidence: the region is likely experimentally accurate. pLDDT between 70 and 90 means moderate confidence: validate key interactions experimentally. pLDDT below 70 signals low confidence: the prediction requires experimental structure determination. Use PAE (Predicted Aligned Error) to distinguish rigid domains from flexible linkers; high PAE between two regions means they’re probably not in contact.
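When triaging predictions in bulk, those bands are easy to encode; `classify_plddt` is a hypothetical helper reflecting the thresholds above.

```python
def classify_plddt(plddt):
    """Map a pLDDT value (0-100) to the confidence bands described above.
    Hypothetical helper for bulk triage of predictions."""
    if plddt > 90:
        return "high"      # likely experimentally accurate
    if plddt >= 70:
        return "moderate"  # validate key interactions experimentally
    return "low"           # needs experimental structure determination
```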

Parameter constraints you need to know: maximum sequence length around 2,500 residues (longer sequences fail or time out), maximum ligand size roughly 100 heavy atoms (larger molecules degrade accuracy), and a multi-chain limit around 10 chains (more chains increase error propagation). Avoid homopolymers like polyA DNA tracts; the model struggles with repetitive sequences that lack structural information.
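A quick pre-flight check against these limits can save wasted submissions. `validate_job` is a hypothetical helper; the limits are the approximate values quoted above, not hard-documented cutoffs.

```python
def validate_job(sequences, ligand_heavy_atoms=0):
    """Pre-flight check against the approximate limits described above.
    Hypothetical helper; the thresholds are soft, per the text."""
    warnings = []
    total = sum(len(s) for s in sequences)
    if total > 2500:
        warnings.append(f"total length {total} exceeds ~2,500 residues")
    if len(sequences) > 10:
        warnings.append(f"{len(sequences)} chains exceed the ~10-chain limit")
    if ligand_heavy_atoms > 100:
        warnings.append(f"{ligand_heavy_atoms} heavy atoms exceed the ~100-atom limit")
    return warnings
```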

What works reliably: protein-small molecule complexes (the sweet spot), antibody-antigen prediction (well-trained domain), protein-DNA/RNA with known binding motifs, metal-binding proteins if training data exists. What doesn’t work: intrinsically disordered proteins (confidence scores collapse), membrane proteins without templates (topology prediction fails), covalent inhibitors (model assumes non-covalent binding), allosteric conformational changes (static structure only), large assemblies over 2 MDa (computational limits).

The system prompt equivalent in AlphaFold 3 is the metadata configuration JSON. Set model_version to “alphafold3”, confidence_threshold to 0.7 for filtering low-quality predictions, include_ions to true if your complex has metals, relax_structure to false to skip energy minimization for speed, num_predictions to 5 to generate an ensemble for uncertainty quantification, and template_mode to “auto” to use PDB templates if available.
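Collected into one place, that configuration would look like the following. The key names come from the text above; since the exact schema isn’t publicly documented, treat this as illustrative.

```python
import json

# Configuration keys as described above; the exact schema isn't publicly
# documented, so treat the names and values as illustrative.
config = {
    "model_version": "alphafold3",
    "confidence_threshold": 0.7,  # filter out low-quality predictions
    "include_ions": True,         # the complex contains metals
    "relax_structure": False,     # skip energy minimization for speed
    "num_predictions": 5,         # ensemble for uncertainty quantification
    "template_mode": "auto",      # use PDB templates if available
}
config_json = json.dumps(config, indent=2)
```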

Local deployment requires A100-class GPUs and institutional access

| Configuration | Hardware | Speed | Cost |
|---|---|---|---|
| Minimum | NVIDIA A100 40GB, 64GB RAM, 500GB SSD | 60-180 seconds per prediction | $10,000 GPU + $3,000 system |
| Recommended | NVIDIA A100 80GB or H100, 128GB RAM, 1TB NVMe SSD | 30-90 seconds per prediction | $25,000 GPU + $5,000 system |
| Production | 4x A100 80GB (multi-GPU), 256GB RAM, 2TB NVMe RAID | 10-30 seconds (parallelized) | $100,000+ cluster |

The minimum configuration handles 90% of single-chain proteins but struggles with large complexes over 1,500 residues. The recommended setup handles all standard use cases including antibody-antigen complexes. The H100 provides 2x faster inference than A100 80GB, enabling real-time iteration for drug design workflows. Multi-GPU scaling is nearly linear up to 4 GPUs, with diminishing returns beyond that.

Software installation requires Linux, CUDA 11.8 or later, cuDNN 8.6 or later, and Python 3.10. The installation documentation walks through dependency setup. You’ll need roughly 1TB of storage for genetic databases (UniProt, PDB, BFD) that the model uses for multiple sequence alignments.

Critical limitation: model weights are not publicly available. Local deployment requires a research collaboration agreement with DeepMind, a non-commercial use attestation, and Google Cloud Storage access for downloading the weights (roughly 50GB). The weights aren’t downloadable via standard channels like Hugging Face or GitHub releases. As of March 2026, fewer than 50 institutions worldwide have local deployment access, according to university HPC documentation.

For quantization, FP16 (half precision) runs 2x faster and uses 20GB VRAM with less than 1% accuracy loss on most targets. INT8 quantization is not supported because diffusion models degrade significantly at lower precision. CPU-only inference is technically possible but takes 10-50x longer, making it impractical for iterative design workflows.

What AlphaFold 3 can’t predict and why it matters

Intrinsically disordered regions produce pLDDT scores below 50. Proteins like tau and alpha-synuclein lack stable 3D structure, existing as dynamic ensembles rather than fixed conformations. This affects roughly 30% of human proteome regions involved in signaling and regulation. AlphaFold 3 will generate a structure for these regions, but the low pLDDT scores tell you the prediction is unreliable.

Novel folds without PDB templates hit 20-30% error rates. For protein families with no homologs in the training data (orphan folds), AlphaFold 3 performs only marginally better than random. This limits de novo protein design applications where you’re creating sequences that don’t exist in nature. If you’re engineering a completely new fold, you’ll need to validate computationally designed sequences experimentally rather than trusting the predictions.

Large assemblies over 2,000 residues show degraded accuracy. Ribosome (4,200 residues, 4.2 MDa) and proteasome (2,500 residues, 2.5 MDa) predictions are unreliable, with accuracy dropping to 50-60% compared to 80%+ for smaller complexes. The computational constraints come from memory limits and quadratic scaling of attention mechanisms. For very large assemblies, you’ll need to break the structure into subunits and predict them separately.

Membrane protein topology fails without experimental templates. AlphaFold 3 struggles to predict transmembrane helix orientation and lipid-facing residues for GPCRs and ion channels. Community benchmarks show membrane protein accuracy 15-20% lower than soluble proteins. If you’re working with membrane proteins, you’ll need template-based modeling or experimental structures.

Covalent modifications aren’t modeled correctly. The diffusion architecture assumes non-covalent interactions. Covalent inhibitors (acrylamide warheads, irreversible kinase inhibitors) and disulfide bonds in non-standard positions produce chemically implausible geometries. Pharma researchers report that roughly 5% of predictions need manual correction for clashing atoms or inverted chirality.

No dynamics or conformational ensembles. AlphaFold 3 predicts equilibrium structures, one snapshot per prediction. Proteins that switch between open and closed states (kinases, GPCRs, transporters) will only get one conformation. For allosteric mechanisms, protein folding pathways, or anything involving motion, you still need molecular dynamics simulations.

Data policies favor academic research over commercial applications

Training data comes from the Protein Data Bank (public domain) plus synthetic structures generated by DeepMind (proprietary). User data retention on AlphaFold Server: predictions aren’t stored beyond 30 days according to the terms of service. For the research API, data retention policies follow Google Cloud standards with a 90-day default, but exact terms aren’t publicly documented.

Enterprise agreements through Isomorphic Labs use custom data terms where pharma partners retain IP rights to predictions. This matters because pharmaceutical companies are uploading proprietary target sequences and drug candidates. The closed API prevents competitors from accessing that data, unlike open-source alternatives where anyone can inspect the training data and model behavior.

No public documentation exists for SOC 2, ISO 27001, or HIPAA compliance as of March 2026. Google Cloud infrastructure is GDPR-compliant, but AlphaFold 3 service-level compliance isn’t separately certified. For EU researchers, this creates uncertainty about whether predictions on patient-derived protein sequences meet regulatory requirements. The prohibited use policy covers restrictions but not compliance certifications.

Geographic restrictions apply due to U.S. export controls. Access is blocked in China and Russia because advanced AI models are subject to ITAR/EAR regulations. The dual-use concern: the same technology that enables drug discovery could predict structures of engineered toxins or pathogens. DeepMind has an internal review process for sensitive predictions, but the details aren’t public.

For academic institutions, no geographic restrictions apply for non-commercial research. Full access via AlphaFold Server works globally except in sanctioned countries. But commercial use requires enterprise partnerships regardless of location, creating a two-tier system where academic breakthroughs accelerate while commercial applications remain locked behind undisclosed agreements.

Version history and development timeline

| Date | Version | Key Changes |
|---|---|---|
| May 8, 2024 | AlphaFold 3 Initial Release | Published in Nature; diffusion architecture; native ligand/nucleic acid/ion support; AlphaFold Server launched |
| July 2024 | Server Capacity Expansion | Increased daily prediction limits; added batch upload; 30% inference speed improvement |
| October 2024 | Nobel Prize Announcement | Demis Hassabis and John Jumper awarded 2024 Chemistry Nobel for AlphaFold 2 (AlphaFold 3 not eligible due to timing) |
| November 2024 | Source Code Release | Inference pipeline published on GitHub; weights require separate access request; non-commercial license |

The May 2024 Nature paper established AlphaFold 3 as the most accurate biomolecular structure prediction system, with the official DeepMind timeline documenting the evolution from AlphaFold 2. The November code release created a path for institutional deployment but maintained closed weights, preserving DeepMind’s competitive advantage in drug discovery.

The Nobel Prize went to the AlphaFold 2 work (2020-2021) because the committee recognizes discoveries only after their impact has been demonstrated over several years. AlphaFold 3’s impact on drug discovery may be recognized in future awards, but the closed-source model creates a different legacy than AlphaFold 2’s open science approach.

Common questions about AlphaFold 3

Is AlphaFold 3 free to use?

Yes for non-commercial research through the AlphaFold Server, with a limit of 10 predictions per day. Research API access requires approval and has undisclosed pricing. Commercial use requires enterprise partnerships through Isomorphic Labs with undisclosed terms.

How accurate is AlphaFold 3 compared to AlphaFold 2?

AlphaFold 3 achieves 76% accuracy on protein-ligand binding poses versus effectively 0% for AlphaFold 2 (which doesn’t predict ligand binding). On protein-only structures, both perform similarly at 85-90% accuracy. The major improvement is multimodal complex prediction.

Can I run AlphaFold 3 on my own hardware?

Only with institutional approval from DeepMind. The inference code is open-source but model weights require a research collaboration agreement. You’ll need an NVIDIA A100 or better GPU. CPU-only inference is impractical due to 10-50x slower speed.

What’s the difference between AlphaFold 3 and RoseTTAFold?

AlphaFold 3 is more accurate (79% vs 65% on protein-nucleic complexes) but closed-source. RoseTTAFold is fully open-source and supports dynamics modeling. For pure accuracy, use AlphaFold 3. For customization and transparency, use RoseTTAFold.

Does AlphaFold 3 work for membrane proteins?

Only if similar structures exist in PDB. Without templates, accuracy drops 15-20% below soluble proteins. Transmembrane topology prediction is unreliable. For novel membrane proteins, you’ll need experimental structure determination.

Can AlphaFold 3 predict protein dynamics?

No. It produces static equilibrium structures only. For conformational changes, allosteric mechanisms, or folding pathways, you need molecular dynamics simulations. AlphaFold 3 predictions can serve as starting points for MD but don’t replace it.

Is my data safe when using AlphaFold Server?

Predictions aren’t stored beyond 30 days according to terms of service. Data is processed on Google Cloud infrastructure. No public compliance certifications exist for SOC 2 or HIPAA. For highly sensitive sequences, consider local deployment if you can get institutional access.

How long does a prediction take?

30-120 seconds on an A100 GPU depending on complex size. A 300-residue protein with small molecule takes 30 seconds. A 1,500-residue antibody-antigen complex takes 90 seconds. CPU-only inference takes 30-90 minutes for the same predictions.

Alex Morgan
I write about artificial intelligence as it shows up in real life, not in demos or press releases. I focus on how AI changes work, habits, and decision-making once it's actually used inside tools, teams, and everyday workflows. Most of my reporting looks at second-order effects: what people stop doing, what gets automated quietly, and how responsibility shifts when software starts making decisions for us.