EOS Transactions Stuck in 'Signed' or Failed State

Problem

Customers report EOS (and occasionally Vaulta/tEOS) withdrawal transactions that remain stuck in a "signed" state for extended periods (hours) without being broadcast to the EOS blockchain, or transactions that fail outright. The transfer appears in the BitGo UI as "signed" but never confirms on-chain. In some cases, incoming EOS deposits also fail to appear in BitGo wallets due to indexer issues. This is a recurring pattern affecting the EOS coin on BitGo's platform, typically caused by node connectivity problems, transaction expiry, or insufficient balance at broadcast time.

Diagnostics

  • Open the internal send-queue diagnostic tool and look up the transaction hash. Check the state field — common values are attempted (still retrying) or failed (permanently failed).
  • Inspect the error field in the send queue entry. Key error messages to distinguish root causes:
    • Failed to connect to full node -- java.net.SocketTimeoutException: Read timed out → node connectivity/timeout issue
    • Unsupported response code. Got 400 for POST with body/compression must be boolean → EOS node configuration issue
    • expired_tx_exception → transaction validity window (1 hour for EOS) elapsed before broadcast succeeded
    • eosio_assert_message_exception with assertion failure with message: overdrawn balance → insufficient funds at time of broadcast
    • assert_exception with is_canonical( c ): signature is not canonical → signing/canonicalization bug (requires engineering)
  • Check the attempts count — a high number (hundreds/thousands) with a timeout error indicates persistent node unreachability.
  • Verify the wallet transfer record status: signed means still in queue; failed means permanently failed and safe to retry.
  • Check https://status.bitgo.com/ for any active EOS incidents or recent resolved incidents.
  • For missing incoming deposits, check whether the EOS indexer is operational by reviewing recent Slack threads in the relevant DevOps/ETH channels or the status page.
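For quick triage, the error-to-scenario mapping above can be expressed as a small lookup. The sketch below is illustrative only (the RULES table and classify name are ours, not part of any BitGo tooling):

```python
# Sketch: map a send-queue `error` string to the matching scenario below.
# The RULES table mirrors the diagnostic bullets above; names are illustrative.

RULES = [
    ("SocketTimeoutException", "node-connectivity-timeout"),
    ("body/compression must be boolean", "node-400-compression-error"),
    ("expired_tx_exception", "expired-tx-exception"),
    ("overdrawn balance", "overdrawn-balance"),
    ("is_canonical( c )", "signature-not-canonical"),
]

def classify(error: str) -> str:
    for needle, scenario in RULES:
        if needle in error:
            return scenario
    return "unknown-escalate"

print(classify("Failed to connect to full node -- "
               "java.net.SocketTimeoutException: Read timed out"))
# node-connectivity-timeout
```

Anything that does not match a known pattern should be escalated rather than guessed at.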

Resolution


Scenario: eos-transaction-signed-failed#node-connectivity-timeout

Trigger: The send queue shows state=attempted with error containing "Failed to connect to full node -- java.net.SocketTimeoutException: Read timed out" and a high attempt count.

Signals: SocketTimeoutException, Read timed out, Failed to connect to full node, state=attempted, EOS signed stuck, pending

Steps:

  1. Confirm the transaction is stuck in state=attempted in the send queue with the timeout error.
  2. Escalate to the DevOps/Engineering team to investigate EOS node connectivity. Reference the relevant internal Slack channels (e.g., DevOps channel).
  3. Once the node issue is resolved by engineering, check whether the transaction has expired (EOS transactions have a validity window of approximately 1 hour).
  4. If the transaction has expired, request engineering to move the transaction to a failed state so the customer can retry.
  5. Inform the customer that the transaction failed due to node connectivity issues and that they should reinitiate the transaction.

Notes: EOS transfers have a short validity window of 1 hour and will fail if they are not confirmed within that window. Even after the node issue is fixed, an expired transaction cannot be recovered; it must be re-created and re-sent.
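The expiry check in step 3 can be made concrete: an EOS transaction carries a UTC `expiration` timestamp in its header, and once that time has passed the transaction can never confirm. A minimal sketch (the helper name is ours; the timestamp format follows the EOS transaction header):

```python
from datetime import datetime, timezone

def is_expired(expiration, now=None):
    """True if the EOS transaction's `expiration` header (UTC,
    e.g. '2023-06-19T03:54:12') has passed. An expired transaction
    can never confirm and must be re-created, not re-broadcast."""
    exp = datetime.fromisoformat(expiration).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return now >= exp

# Example: a tx signed at 02:54 UTC with a 1-hour window expires at 03:54 UTC:
print(is_expired("2023-06-19T03:54:12",
                 now=datetime(2023, 6, 19, 4, 30, tzinfo=timezone.utc)))  # True
```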

"The transaction fail to hit the node so I have raised this transaction to engineering to get the transaction put into a failed state so you can try create the transaction again." (ticket #206913)

"This was caused by an issue with our EOS nodes which we have since fixed. However, the transaction in question had expired and failed. Please reinitiate the transaction as required and it should go through." (ticket #274550)

"EOS transfers have a rather short validity window of 1 hour and will fail if it does not get confirmed." (ticket #274550)


Scenario: eos-transaction-signed-failed#expired-tx-exception

Trigger: The send queue shows state=failed with error "expired_tx_exception" indicating the transaction exceeded its validity window.

Signals: expired_tx_exception, Operation failed after multiple attempts and will never succeed, EOS failed, expiry

Steps:

  1. Confirm the transaction shows state=failed with expired_tx_exception in the error field.
  2. Verify on an EOS blockchain explorer (e.g., bloks.io) that the transaction hash does not appear on-chain.
  3. Confirm that the transaction has already been moved to failed state (no further action needed on the backend).
  4. Inform the customer that the transaction failed due to expiry time being reached, that it will not confirm, and that it can be re-triggered as needed.
  5. If there is a pattern of repeated expiry failures, escalate to engineering to investigate whether there is an underlying node issue causing broadcast delays.

Notes: This is the expected EOS behavior when a transaction cannot be broadcast within its ~1 hour validity window. No funds are lost — the transaction simply never executed on-chain.

"We are reporting this transaction has failed due to expiry time being reached. This transaction will not confirm and can be re-triggered as needed." (ticket #221474)


Scenario: eos-transaction-signed-failed#overdrawn-balance

Trigger: The send queue shows state=failed with error "eosio_assert_message_exception" and message "assertion failure with message: overdrawn balance".

Signals: overdrawn balance, eosio_assert_message_exception, cf_system.cpp, insufficient funds, EOS failed

Steps:

  1. Confirm the error message in the send queue contains assertion failure with message: overdrawn balance.
  2. Review the wallet's transaction history around the time of the failed transaction. Look for multiple concurrent outgoing transactions that collectively exceeded the available balance.
  3. Explain to the customer that the transaction failed because, at the moment of broadcast, the wallet did not have sufficient EOS balance (likely due to multiple concurrent sends before incoming deposits settled).
  4. Confirm that the transaction is in failed state and the customer can safely retry now that the balance has been replenished or other transactions have settled.

Notes: This commonly occurs when multiple large withdrawals are initiated simultaneously before pending incoming deposits are credited. The on-chain balance at broadcast time was insufficient even though the wallet may show adequate balance afterward.
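The history review in step 2 amounts to replaying the wallet's ledger in time order and finding the first event that drives the running balance negative. A sketch, with illustrative event data (the helper name and tuples are ours):

```python
def first_overdraw(events, start=0.0):
    """events: (timestamp, delta) pairs, deposits positive, withdrawals
    negative. Replays them in time order and returns the timestamp of the
    first event that drives the running balance below zero, else None."""
    balance = start
    for ts, delta in sorted(events):  # ISO-8601 strings sort chronologically
        balance += delta
        if balance < 0:
            return ts
    return None

# Two withdrawals broadcast before the deposits meant to fund them:
events = [
    ("2023-06-19T02:50:00Z", -100.0),  # first withdrawal, covered
    ("2023-06-19T02:54:12Z", -500.0),  # second withdrawal, overdrawn
    ("2023-06-19T03:10:00Z", +400.0),  # deposit credited too late
    ("2023-06-19T03:15:00Z", +200.0),  # deposit credited too late
]
print(first_overdraw(events, start=150.0))  # 2023-06-19T02:54:12Z
```

If the replay shows the wallet would have stayed non-negative, the overdraw explanation does not fit and the case should be escalated instead.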

"The failed transaction happened at 2023-06-19T02:54:12Z which seems there wasn't enough balance to do this transaction at that time, as it was right before the two deposits to the wallets." (ticket #228173)


Scenario: eos-transaction-signed-failed#signature-not-canonical

Trigger: The send queue shows state=failed with error "assert_exception" and message "is_canonical( c ): signature is not canonical".

Signals: is_canonical, signature is not canonical, assert_exception, elliptic_secp256k1.cpp, EOS failed

Steps:

  1. Confirm the error in the send queue references elliptic_secp256k1.cpp with message is_canonical( c ): signature is not canonical.
  2. This is a platform-level signing bug that the customer cannot resolve themselves. Escalate immediately to the Engineering team.
  3. Reference any existing JIRA tickets or code-red incidents related to EOS canonical signature issues.
  4. Inform the customer that the Engineering team is investigating and that the failed transaction can be retried once a fix is deployed.

Notes: This issue was associated with a code-red incident (cr-1124) and required an engineering fix. Multiple customers were affected simultaneously.
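For background when discussing this with engineering: the check that fails lives in fc's elliptic_secp256k1.cpp, and the condition it enforces on the 65-byte compact signature (1 recovery byte, 32-byte r, 32-byte s) is roughly the following. This is a sketch for understanding the error, not the authoritative implementation:

```python
def is_canonical(sig: bytes) -> bool:
    """Approximation of the EOS canonicality test from
    fc/elliptic_secp256k1.cpp, applied to a 65-byte compact signature
    (1 recovery byte + 32-byte r + 32-byte s): the high bit of the first
    byte of r and of s must be clear, and a leading zero byte must be
    followed by a byte with its high bit set."""
    assert len(sig) == 65
    r, s = sig[1:33], sig[33:65]
    for comp in (r, s):
        if comp[0] & 0x80:
            return False
        if comp[0] == 0 and not (comp[1] & 0x80):
            return False
    return True
```

A correct signer retries with a different nonce until the signature is canonical, so a non-canonical signature reaching the node points at the signing code, not at the customer's input.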

"We have fixed the cause of these failed EOS transactions and you should no longer see them going forward." (ticket #263434)


Scenario: eos-transaction-signed-failed#indexer-delay-deposits-missing

Trigger: Customer reports incoming EOS deposits are not appearing in their BitGo wallet despite being confirmed on the EOS blockchain explorer.

Signals: EOS deposits missing, indexer, receives not appearing, bloks.io confirmed, EOS receive not found

Steps:

  1. Ask the customer for the transaction hash and verify it is confirmed on the EOS blockchain explorer (e.g., bloks.io).
  2. Check https://status.bitgo.com/ for any active or recently resolved EOS indexer incidents.
  3. If an incident is active, inform the customer that there are delays in EOS sends and receives and direct them to follow the status page for updates.
  4. If no incident is posted but the issue is confirmed, escalate to engineering/DevOps as a potential EOS indexer issue.
  5. Once the indexer issue is resolved, deposits should appear automatically. Follow up with the customer to confirm.

Notes: This affected both sends and receives during indexer outages. The resolution is server-side and does not require customer action beyond waiting and retrying.

"We are currently experiencing delays in EOS sends and receives. Meanwhile, please follow our status page for updates https://status.bitgo.com/" (ticket #255098)

"This was caused by an issue with our EOS indexer which is now fixed and all transactions should be confirmed now." (ticket #226791)


Scenario: eos-transaction-signed-failed#node-400-compression-error

Trigger: The send queue shows error with HTTP 400 response containing "body/compression must be boolean" from the EOS node push_transaction endpoint.

Signals: Unsupported response code, Got 400, body/compression must be boolean, push_transaction, FST_ERR_VALIDATION, EOS node

Steps:

  1. Confirm the send queue error references a 400 response from the /v1/chain/push_transaction endpoint with message body/compression must be boolean.
  2. Escalate to DevOps/Engineering — this indicates a configuration issue with the EOS full node (likely a node software version incompatibility or proxy configuration error).
  3. Once DevOps resolves the node configuration, check whether the transaction has expired.
  4. If expired, inform the customer the transaction failed and they should reinitiate.

Notes: This is distinct from a simple timeout — it indicates the node is reachable but rejecting the request format. Requires node-side configuration fix.
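For context on what "must be boolean" means here: the accepted type of the `compression` field in a push_transaction body has varied across nodeos versions and proxies (some accept the string forms "none"/"zlib", while the failing validator demands a boolean). A hedged sketch of normalizing the field before POSTing; the helper name is ours and this is illustrative, not a fix support should apply:

```python
def normalize_push_payload(payload: dict) -> dict:
    """Coerce the `compression` field of a /v1/chain/push_transaction body
    to a boolean, which the failing node's validator demands
    ('body/compression must be boolean'). Also accepts the string forms
    ('none'/'zlib') emitted by other nodeos versions."""
    out = dict(payload)
    c = out.get("compression", False)
    if isinstance(c, str):
        out["compression"] = c.lower() == "zlib"
    else:
        out["compression"] = bool(c)
    return out
```

The actual resolution is node-side (per step 2); this sketch only clarifies why a request that worked against one node version can be rejected by another.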

Related