arxiv:2502.06329

Expect the Unexpected: FailSafe Long Context QA for Finance

Published on Feb 10, 2025
· Submitted by melisa on Feb 12, 2025
#1 Paper of the day
Authors:
Kiran Kamble, Melisa Russak, Dmytro Mozolevskyi, Muayad Ali, Mateusz Russak, Waseem AlShikh

Abstract

We propose a new long-context financial benchmark, FailSafeQA, designed to test the robustness and context-awareness of LLMs against six variations in human-interface interactions in LLM-based query-answer systems within finance. We concentrate on two case studies: Query Failure and Context Failure. In the Query Failure scenario, we perturb the original query to vary in domain expertise, completeness, and linguistic accuracy. In the Context Failure case, we simulate the uploads of degraded, irrelevant, and empty documents. We employ the LLM-as-a-Judge methodology with Qwen2.5-72B-Instruct and use fine-grained rating criteria to define and calculate Robustness, Context Grounding, and Compliance scores for 24 off-the-shelf models. The results suggest that although some models excel at mitigating input perturbations, they must balance robust answering with the ability to refrain from hallucinating. Notably, Palmyra-Fin-128k-Instruct, recognized as the most compliant model, maintained strong baseline performance but encountered challenges in sustaining robust predictions in 17% of test cases. On the other hand, the most robust model, OpenAI o3-mini, fabricated information in 41% of tested cases. The results demonstrate that even high-performing models have significant room for improvement and highlight the role of FailSafeQA as a tool for developing LLMs optimized for dependability in financial applications. The dataset is available at: https://huggingface.co/datasets/Writer/FailSafeQA
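For readers who want to explore the benchmark, below is a minimal sketch of pulling FailSafeQA from the Hub and running a judge-style rating pass over one answer. Only the dataset repo id (Writer/FailSafeQA) and the judge model named in the abstract (Qwen2.5-72B-Instruct) come from the paper; the column names, judge prompt, 1-5 scale, and the `generate` callable are illustrative assumptions, not the dataset's confirmed schema or the paper's exact rating criteria.

```python
# Minimal sketch: load FailSafeQA and rate one answer with an LLM judge.
# Only the repo id "Writer/FailSafeQA" is taken from the paper; the split
# and column names, judge prompt, and 1-5 scale below are illustrative
# assumptions -- inspect the dataset to learn its real schema.
from datasets import load_dataset

ds = load_dataset("Writer/FailSafeQA")
print(ds)  # shows the actual splits and columns before relying on names

JUDGE_PROMPT = """You are grading a financial QA answer.
Question: {question}
Context: {context}
Answer: {answer}
Rate how well the answer is grounded in the context, from 1 (fabricated)
to 5 (fully supported), and reply with the number only."""


def judge(question: str, context: str, answer: str, generate) -> int:
    """Ask a judge model (e.g. Qwen2.5-72B-Instruct, as in the paper) to
    rate one answer. `generate` is any callable mapping a prompt string
    to the model's text completion (hypothetical, supplied by the user)."""
    reply = generate(JUDGE_PROMPT.format(
        question=question, context=context, answer=answer))
    return int(reply.strip()[0])  # assumes the reply starts with a digit
```

The paper aggregates such fine-grained ratings into Robustness, Context Grounding, and Compliance scores across 24 models; the exact criteria and aggregation are defined in the paper, not reproduced here.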

Community

melisa: https://huggingface.co/datasets/Writer/FailSafeQA

librarian-bot: This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API:

* DeepThink: Aligning Language Models with Domain-Specific User Intents (https://huggingface.co/papers/2502.05497) (2025)
* The FACTS Grounding Leaderboard: Benchmarking LLMs' Ability to Ground Responses to Long-Form Input (https://huggingface.co/papers/2501.03200) (2025)
* Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains (https://huggingface.co/papers/2501.14431) (2025)
* LongReason: A Synthetic Long-Context Reasoning Benchmark via Context Expansion (https://huggingface.co/papers/2501.15089) (2025)
* Context Filtering with Reward Modeling in Question Answering (https://huggingface.co/papers/2412.11707) (2024)
* Breaking Focus: Contextual Distraction Curse in Large Language Models (https://huggingface.co/papers/2502.01609) (2025)
* QUENCH: Measuring the gap between Indic and Non-Indic Contextual General Reasoning in LLMs (https://huggingface.co/papers/2412.11763) (2024)

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space: https://huggingface.co/spaces/librarian-bots/recommend_similar_papers

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

vansin: Great!!!

ajithprabhakar: Here is the article on ajithp.com featuring this paper: https://ajithp.com/2025/02/15/failsafeqa-evaluating-ai-hallucinations-robustness-and-compliance-in-financial-llms/


Models citing this paper: 0

No model links this paper. Cite arxiv.org/abs/2502.06329 in a model README.md to link it from this page.

Datasets citing this paper: 1

Spaces citing this paper: 2

Collections including this paper: 9