REDCap is the tool every academic lab learns to love by Tuesday and quietly resent by Friday. Vanderbilt built it in 2004 for a single investigator running a single survey. Two decades later it's the de facto standard across thousands of institutions. But the genome of "one PI, one protocol, one IRB" never really left, and you can feel it the moment your study grows past two sites or three coordinators.
I've watched four research teams in the past year hit the same wall, usually at the same point in the project. Different labs, different therapeutic areas, mostly the same conversation. Below is the checklist I now run when someone says "we love REDCap, but…"
If three or more are true for you, you've outgrown it.
1. You have a folder of data dictionaries with names like crf_v3_FINAL_FINAL_apr_use_this.csv
REDCap forms are built by uploading a CSV data dictionary. That's elegant for one researcher with version control discipline, but a horror show for a team of eight people across three time zones. There's no "track changes" on the dictionary. There are no comments. There's no record of who edited this branching logic and why.
The accepted workflow is for one designated person to own the dictionary, and everyone else emails them with change requests. That works for one PI. It does not work for a sponsor preparing a clinical protocol where every CRF change needs justification on file.
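One stopgap is to diff dictionary versions yourself before each upload. A minimal sketch in Python, using only the standard library; the example dictionary contents and file-naming are hypothetical, but the column headers follow REDCap's standard data dictionary format:

```python
import csv
import io

# Two tiny example dictionary versions (in practice, read your exported CSVs).
OLD = """Variable / Field Name,Field Type,Branching Logic (Show field only if...)
subj_id,text,
ae_severity,radio,
"""
NEW = """Variable / Field Name,Field Type,Branching Logic (Show field only if...)
subj_id,text,
ae_severity,radio,[ae_occurred] = '1'
ae_onset,text,
"""

def load_dict(text):
    """Index a REDCap data dictionary CSV by variable name."""
    return {row["Variable / Field Name"]: row
            for row in csv.DictReader(io.StringIO(text))}

def diff_dictionaries(old, new):
    """Report fields added, removed, or changed between two versions."""
    changes = []
    for name in sorted(set(old) | set(new)):
        if name not in old:
            changes.append((name, "added", ""))
        elif name not in new:
            changes.append((name, "removed", ""))
        else:
            cols = [c for c in old[name] if old[name][c] != new[name][c]]
            if cols:
                changes.append((name, "changed", ", ".join(cols)))
    return changes

for name, kind, cols in diff_dictionaries(load_dict(OLD), load_dict(NEW)):
    print(f"{kind:8s} {name}  {cols}")
```

It's not track changes, and it won't tell you why a field moved, but pasting this output into the change-request email at least documents what moved.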
2. Cross-site coordination requires a spreadsheet about the spreadsheets
Multicenter REDCap is technically supported. In practice it means each site has its own DAG (Data Access Group), and the coordinator at the sponsor pulls everything together by exporting per site, deduplicating subject IDs by hand, and praying nobody renamed a field locally.
The teams I've talked to lose a working day a month to this. Not building anything. Just getting sites to agree on which column means what. That cost is invisible in the budget and very visible in your coordinator's calendar.
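The manual merge step, at least, is automatable. A sketch of stacking per-DAG exports while catching the two usual failure modes, schema drift and duplicate subject IDs; the site names and field names here are invented for illustration:

```python
import csv
import io
from collections import defaultdict

# Example per-site exports (in practice, one CSV file per Data Access Group).
SITE_EXPORTS = {
    "site_a": "record_id,ae_severity\nA-001,2\nA-002,3\n",
    "site_b": "record_id,ae_severity\nA-002,1\nB-001,4\n",
}

def combine(exports):
    """Stack per-site exports; fail on schema drift, report duplicate IDs."""
    columns, rows = None, []
    seen = defaultdict(list)
    for site, text in exports.items():
        reader = csv.DictReader(io.StringIO(text))
        if columns is None:
            columns = reader.fieldnames
        elif reader.fieldnames != columns:
            raise ValueError(f"{site}: columns differ from first site")
        for row in reader:
            seen[row["record_id"]].append(site)
            rows.append({"site": site, **row})
    dupes = {rid: sites for rid, sites in seen.items() if len(sites) > 1}
    return rows, dupes

rows, dupes = combine(SITE_EXPORTS)
print(f"{len(rows)} rows combined; duplicate IDs: {dupes}")
```

Twenty lines of script won't make sites agree on field semantics, but it turns "deduplicate by hand and pray" into a report you can send back to the offending site.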
3. You can't see what someone else is editing right now
REDCap has no live collaboration. Two coordinators editing the same record at the same time produces last-write-wins, sometimes silently. The audit trail will tell you who overwrote whom, eventually, when you go look. Nobody gets notified in the moment.
Fine when you're the only one in the database. It's the source of every "wait, I just typed that, why is it gone" Slack message in a multi-coordinator team.
4. Protocol amendments don't propagate
You amend the protocol. Now you need to amend the CRF. In REDCap that means: edit the dictionary, re-upload, manually migrate existing records that used the old fields, and update the codebook everyone has been referring to.
There is no link between the protocol document and the CRF schema. You have a Word doc on SharePoint, a database in REDCap, and an SOP that says "remember to update both." Protocol amendments are the single biggest cause of IND delays we see; the consultant rule of thumb is 2 to 6 months per cycle. REDCap's design assumes amendments are rare. They're not.
5. The export-to-stats pipeline involves a person
Want your data in SAS, R, or Stata? REDCap exports a CSV plus a syntax file. You run the syntax. If anyone changed a field type since the last export, the syntax breaks. Variables get re-coded silently. Labels go missing.
Every team I've talked to has at least one person whose unofficial title is "the one who fixes the REDCap export when it breaks the night before DSMB." That's not a job. That's a tax.
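Part of that tax can be pre-paid with a validation pass before the export ever reaches the stats software. A sketch that checks coded values in an export against the dictionary's choice lists; the dictionary and export contents are made up, but the "code, label | code, label" choices format is REDCap's own:

```python
import csv
import io

DICTIONARY = """Variable / Field Name,Field Type,"Choices, Calculations, OR Slider Labels"
subj_id,text,
ae_severity,radio,"1, Mild | 2, Moderate | 3, Severe"
"""
EXPORT = "subj_id,ae_severity\n001,2\n002,5\n"

def allowed_codes(choices):
    """Parse REDCap's 'code, label | code, label' choice string into codes."""
    return {part.split(",", 1)[0].strip() for part in choices.split("|")}

def validate(dictionary_text, export_text):
    """Flag export values that fall outside the dictionary's coded choices."""
    coded = {
        row["Variable / Field Name"]:
            allowed_codes(row["Choices, Calculations, OR Slider Labels"])
        for row in csv.DictReader(io.StringIO(dictionary_text))
        if row["Field Type"] in ("radio", "dropdown", "checkbox")
    }
    problems = []
    # start=2: the header occupies line 1 of the export file.
    for line_no, row in enumerate(csv.DictReader(io.StringIO(export_text)), start=2):
        for field, codes in coded.items():
            if row.get(field) and row[field] not in codes:
                problems.append((line_no, field, row[field]))
    return problems

print(validate(DICTIONARY, EXPORT))
```

Run it nightly against the current dictionary and the export breaks loudly on a Tuesday afternoon instead of silently the night before DSMB.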
6. The audit trail tells you what changed but not why
REDCap has a logging table. It's complete. It records every value change, every user login, every form submission. What it doesn't record is the reason for the change. FDA's Part 11 expectations and its data-integrity guidance call for a reason for change to be captured at the moment of edit, under electronic signature, for clinical data that's part of an IND or NDA submission.
REDCap can be configured to prompt for a reason on edits, but it's a free-text field with no taxonomy, no structured query, no link to the protocol amendment that triggered the change. When FDA inspects, "user X changed field Y on date Z" is a starting point, not an answer.
7. There's no semantic layer
The dictionary tells REDCap that field ae_severity is a radio button with five options. It does not know that ae_severity is a CTCAE grade, that values 4 and 5 trigger expedited reporting under 21 CFR 312.32, or that "show me all SAEs that should have been reported within 7 days but weren't" is a question worth answering. You write that query yourself, in R, every time you need it.
Your statistical software has functions that know what CTCAE means. Your safety database has rules for expedited reporting timelines. The capture layer that feeds both of them does not. Everything downstream has gotten smarter; the source has stayed dumb.
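For concreteness, here is roughly what "you write that query yourself, every time" looks like. A standard-library sketch of the expedited-reporting check described above; the field names, example rows, and the grade-4 threshold are illustrative assumptions, with the 7-day window taken from the expedited-reporting discussion in the text:

```python
import csv
import io
from datetime import date, timedelta

# Example AE rows (in practice, the REDCap export); dates are ISO-formatted.
AES = """subj_id,ae_severity,onset_date,reported_date
001,3,2024-05-01,2024-05-03
002,4,2024-05-01,2024-05-12
003,5,2024-06-02,2024-06-05
"""

def late_expedited_reports(text, grade_threshold=4, window_days=7):
    """Find grade >= threshold AEs reported after the expedited window."""
    late = []
    for row in csv.DictReader(io.StringIO(text)):
        if int(row["ae_severity"]) < grade_threshold:
            continue
        onset = date.fromisoformat(row["onset_date"])
        reported = date.fromisoformat(row["reported_date"])
        if reported - onset > timedelta(days=window_days):
            late.append(row["subj_id"])
    return late

print(late_expedited_reports(AES))  # ['002']
```

Fifteen lines, but they encode regulatory knowledge that lives in your head, not in the capture layer, which is exactly the problem.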
8. IND prep feels like a translation project
This is the wall most academic-to-industry teams hit. The data is fine. The data is great. But REDCap was never structured to map onto the CTD Module 2 summaries or Module 5 study reports. So you copy. And paste. And reformat. And cite. For weeks.
We wrote a separate post on what that bridge actually looks like and how to compress it from weeks to days. The short version: the answer isn't "use a different EDC." It's about having an orchestration layer between your captured data and the dossier structure FDA expects.
So what do you do about it
Scored three or more? You've got two real options.
The first is to stay on REDCap and bolt on processes: a dedicated data manager, SOPs for the dictionary, a wrapper around the export pipeline, training so coordinators don't step on each other. It works. It also adds a salary line and doesn't fix the underlying "academic tool, industry workflow" problem.
The second is to move to something built for collaborative regulatory work. That's where Regfo sits. We're not an EDC replacement; we don't capture patient data. We sit on top of the data you already have and the documents you're drafting, and run both against the FDA/ICH rule set so you see your gaps before submission. Pre-IND, pre-amendment, pre-clinical-hold.
If you're at the "REDCap got us here, but the next 18 months will break it" point, paste a draft protocol into Regfo and you'll see the gap analysis run in 20 seconds. The bridge from research data to dossier, without making you re-type anything.
Related reading:
- From REDCap Data to IND Submission Without the Copy-Paste Marathon — the bridge piece
- FDA IND Submission Checklist 2026 — complete requirements by CTD module
- Common IND Deficiencies and How to Avoid Them — the patterns that trigger amendments