- Executive Summary
- Architecture Deep Dive
- Smart Contract Layer
- Relay Core Implementation
- Porto SDK Integration
- Advanced Features
- Development Workflows
- Production Deployment
- Troubleshooting & Debugging
The Ithaca Relay is a sophisticated cross-chain transaction router that serves as the backbone for EIP-7702 account abstraction in the Porto ecosystem. It transforms Ethereum accounts into programmable, cross-chain entities with advanced features like fee abstraction, intent-based execution, and multi-chain coordination.
- Intent-Based Architecture: Users express desired outcomes rather than explicit transactions
- EIP-7702 Account Delegation: Enables advanced account programmability without breaking existing tooling
- Cross-Chain Settlement: Automatic funding and execution across multiple chains via LayerZero
- Fee Abstraction: Payment in any supported token with sponsor capabilities
- Advanced Security: Multi-signature support, spending limits, and granular permissions
graph TB
subgraph "Client Layer"
SDK[Porto SDK]
Wagmi[Wagmi Integration]
Browser[Browser Extension]
end
subgraph "Relay Service"
RPC[RPC Server]
TX[Transaction Processor]
QUOTE[Quote Engine]
SETTLE[Settlement Coordinator]
end
subgraph "Smart Contracts"
ORCH[Orchestrator]
ACC[Account Implementation]
SIM[Simulator]
ESC[Escrow]
end
subgraph "Infrastructure"
DB[(PostgreSQL)]
METRICS[Prometheus]
LZ[LayerZero]
ORACLE[Price Oracle]
end
SDK --> RPC
Wagmi --> RPC
Browser --> RPC
RPC --> TX
RPC --> QUOTE
TX --> SETTLE
QUOTE --> ORCH
TX --> ORCH
SETTLE --> ESC
TX --> DB
QUOTE --> ORACLE
SETTLE --> LZ
RPC --> METRICS
- Intent-Driven Execution: Users describe what they want to achieve, not how to achieve it
- Trustless Operation: No trust assumptions between relay and users
- Competitive Filling: Multiple relays can compete to fill the same intent
- Cross-Chain Native: Designed from the ground up for multi-chain operations
- Backward Compatibility: Works with existing EOAs and tooling
sequenceDiagram
participant U as User
participant SDK as Porto SDK
participant R as Relay
participant O as Orchestrator
participant S as Simulator
participant B as Blockchain
U->>SDK: sendCalls(calls)
SDK->>R: prepareCalls(intent)
R->>S: simulate execution
S-->>R: gas estimate + asset diffs
R->>R: calculate fees & quote
R-->>SDK: signed quote + digest
SDK->>U: request signature
U->>SDK: sign digest
SDK->>R: sendPreparedCalls(signature)
R->>R: verify signatures
R->>B: broadcast transaction
B->>O: execute intent
O-->>B: transaction receipt
R-->>SDK: bundle_id
SDK->>R: getCallsStatus(bundle_id)
R-->>SDK: execution status
The relay implements a sophisticated 2D nonce system that enables both ordered and unordered execution:
// Nonce Memory Layout (256 bits total)
// ┌──────────────────────────────────────────┬───────────────────────────┐
// │         Sequence Key (192 bits)          │ Sequential Nonce (64 bits)│
// ├────────────────┬─────────────────────────┼───────────────────────────┤
// │ Multichain     │ Random/User-Defined     │ Auto-incrementing Counter │
// │ Flag (16 bits) │ (176 bits)              │ (64 bits)                 │
// └────────────────┴─────────────────────────┴───────────────────────────┘
pub const MULTICHAIN_NONCE_PREFIX: U256 = uint!(0xc1d0_U256);
// Example nonces:
// Single-chain: 0x0000...0001 (sequence 0, nonce 1)
// Multi-chain: 0xc1d0...0001 (multichain sequence, nonce 1)
// User-defined: 0xabcd...0001 (user sequence 0xabcd, nonce 1)
Key Features:
- Ordering Control: Same sequence key = ordered execution, different keys = parallel execution
- Multichain Support: Special prefix excludes chain ID from EIP-712 domain
- Flexibility: Users can define custom sequence keys for application-specific ordering
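To make the layout concrete, the sketch below packs a sequence key and counter into a single nonce and checks the multichain flag. It assumes alloy's U256; the helper names are illustrative and not part of the relay's API.
use alloy_primitives::{uint, U256};
// Same prefix value as the relay constant shown above.
const MULTICHAIN_NONCE_PREFIX: U256 = uint!(0xc1d0_U256);
/// Pack a 192-bit sequence key and a 64-bit counter into one 2D nonce.
fn pack_nonce(sequence_key: U256, counter: u64) -> U256 {
    (sequence_key << 64) | U256::from(counter)
}
/// A nonce is "multichain" when its top 16 bits equal the 0xc1d0 prefix.
fn is_multichain(nonce: U256) -> bool {
    (nonce >> 240) == MULTICHAIN_NONCE_PREFIX
}
fn main() {
    // Sequence key 0: strictly ordered, single-chain execution.
    let ordered = pack_nonce(U256::ZERO, 1);
    // Multichain sequence key: the prefix occupies the key's top 16 bits.
    let multichain = pack_nonce(MULTICHAIN_NONCE_PREFIX << 176, 1);
    assert!(!is_multichain(ordered));
    assert!(is_multichain(multichain));
}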
The Orchestrator is the central coordination contract that manages intent execution:
contract OrchestratorContract {
event IntentExecuted(
address indexed eoa,
uint256 indexed nonce,
bool incremented,
bytes4 err
);
function execute(bytes calldata encodedIntent)
external payable returns (bytes4 err);
function eip712Domain() external view returns (...);
function accountImplementationOf(address eoa) external view returns (address);
function pause(bool isPause) external;
}
Core Responsibilities:
- Intent Validation: Verifies signatures and nonce management
- Payment Processing: Handles pre/post execution payments
- Delegation Verification: Ensures accounts use supported implementations
- Error Handling: Returns specific error codes for debugging
- Emergency Controls: Pausable for security incidents
Execution Flow:
// 1. Decode intent from calldata
let intent = Intent::abi_decode(encoded_intent)?;
// 2. Validate nonce and prevent replay
self.validate_nonce(intent.eoa, intent.nonce)?;
// 3. Process pre-payment
self.process_payment(&intent, PaymentPhase::Pre)?;
// 4. Execute precalls (key authorization, permissions)
for precall in &intent.encoded_pre_calls {
self.execute_precall(precall)?;
}
// 5. Execute main calls on account
self.execute_on_account(intent.eoa, &intent.execution_data)?;
// 6. Process final payment
self.process_payment(&intent, PaymentPhase::Post)?;
// 7. Emit execution event
self.emit_intent_executed(intent.eoa, intent.nonce, true, NO_ERROR);
The IthacaAccount provides programmable account functionality:
contract IthacaAccount {
enum SpendPeriod { Minute, Hour, Day, Week, Month, Year }
struct SpendInfo {
address token;
SpendPeriod period;
uint256 limit;
uint256 spent;
uint256 lastUpdated;
uint256 currentSpent;
uint256 current;
}
function setSpendLimit(bytes32 keyHash, address token, SpendPeriod period, uint256 limit) external;
function setCanExecute(bytes32 keyHash, address target, bytes4 fnSel, bool can) external;
function spendAndExecuteInfos(bytes32[] calldata keyHashes) external view returns (...);
function unwrapAndValidateSignature(bytes32 digest, bytes calldata signature) external view returns (bool, bytes32);
}
Key Features:
- Hierarchical Key Management:
pub struct Key {
    pub keyType: KeyType,        // Secp256k1, WebAuthnP256
    pub scheme: SignatureScheme, // ECDSA, WebAuthn
    pub key: Bytes,              // Public key data
    pub keyFlags: U256,          // Admin flags
    pub metadata: Bytes,         // Additional context
}
- Granular Permissions:
pub enum Permission {
    Spend(SpendPermission),
    Call(CallPermission),
}
pub struct SpendPermission {
    pub token: Address,
    pub limit: U256,
    pub period: SpendPeriod,
}
pub struct CallPermission {
    pub to: Address,
    pub selector: FixedBytes<4>,
}
- Signature Validation:
pub struct Signature {
    pub inner_signature: Bytes, // Raw signature bytes
    pub key_hash: B256,         // Key identifier
    pub prehash: bool,          // Whether digest is pre-hashed
}
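To illustrate how a per-period spend limit can be enforced, the following sketch models one plausible accounting rule: reset the spent counter when a new period window begins, then reject anything that would exceed the limit. The types and the exact window-reset rule are simplified assumptions, not the on-chain implementation.
/// Illustrative spend-limit accounting; the on-chain rules may differ.
#[derive(Clone, Copy)]
enum SpendPeriod { Minute, Hour, Day, Week }
struct SpendState {
    period: SpendPeriod,
    limit: u128,
    spent: u128,
    window_start: u64, // unix seconds when the current window began
}
fn period_seconds(p: SpendPeriod) -> u64 {
    match p {
        SpendPeriod::Minute => 60,
        SpendPeriod::Hour => 3_600,
        SpendPeriod::Day => 86_400,
        SpendPeriod::Week => 604_800,
    }
}
/// Returns true and records the spend if it fits within the current window.
fn try_spend(state: &mut SpendState, now: u64, amount: u128) -> bool {
    // Roll over to a fresh window when the previous one has elapsed.
    if now >= state.window_start + period_seconds(state.period) {
        state.window_start = now - now % period_seconds(state.period);
        state.spent = 0;
    }
    if state.spent + amount > state.limit {
        return false; // Would exceed the per-period limit.
    }
    state.spent += amount;
    true
}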
The Simulator enables accurate off-chain gas estimation:
contract Simulator {
function simulateV1Logs(
address orchestrator,
bool refundUnusedGas,
uint8 paymentPerGasPrecision,
uint256 paymentPerGas,
uint256 gasOffset,
uint256 gasValidationOffset,
bytes calldata encodedIntent
) external returns (SimulationResult);
}
Simulation Process:
// The relay uses sophisticated simulation for accurate gas estimates
pub async fn simulate_execute(
&self,
simulator: Address,
intent: &Intent,
key_type: KeyType,
payment_per_gas: f64,
asset_info_handle: AssetInfoServiceHandle,
) -> Result<(AssetDiffs, SimulationResult), RelayError> {
// Account for gas variation in P256 vs secp256k1
let gas_validation_offset = if key_type.is_secp256k1() {
U256::ZERO
} else {
P256_GAS_BUFFER
};
// Build simulation transaction with state overrides
let simulate_block = SimBlock::default()
.call(simulator_call)
.with_state_overrides(self.overrides.clone());
    // Execute simulation and decode results
    let result = self.provider().simulate(&simulate_block).await?;
    // Calculate asset diffs from logs
    let asset_diffs = asset_info_handle
        .calculate_asset_diff(simulate_block, result.logs.clone(), self.provider())
        .await?;
    Ok((asset_diffs, result))
}
The Escrow contract manages funds during cross-chain operations:
pub struct Escrow {
pub salt: [u8; 16], // Unique identifier
pub depositor: Address, // Account depositing funds
pub recipient: Address, // Fund recipient (funder contract)
pub token: Address, // Asset being escrowed
pub settler: Address, // Settlement coordinator
pub sender: Address, // Original orchestrator
pub settlement_id: B256, // Intent digest
pub sender_chain_id: U256, // Destination chain
pub escrow_amount: U256, // Amount to escrow
pub refund_amount: U256, // Amount refundable
pub refund_timestamp: U256, // Refund deadline
}
The Funder contract coordinates fund transfers across chains:
contract Funder {
function fundTransfers(
bytes calldata signature,
Transfer[] calldata transfers
) external;
}
The relay exposes a comprehensive RPC API implementing the wallet_* namespace:
- wallet_prepareCalls - Intent preparation and quoting
- wallet_sendPreparedCalls - Intent execution
- wallet_getCallsStatus - Execution monitoring
- wallet_getCapabilities - Relay capabilities
- wallet_getKeys - Account key management
- wallet_getAssets - Multi-chain asset querying
- wallet_upgradeAccount - Account initialization
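Any JSON-RPC 2.0 client can call these methods directly. As a rough sketch, the snippet below queries wallet_getCapabilities with jsonrpsee's HTTP client; the local URL, empty params, and untyped serde_json::Value response are simplifications for illustration.
use jsonrpsee::core::client::ClientT;
use jsonrpsee::http_client::HttpClientBuilder;
use jsonrpsee::rpc_params;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Local relay endpoint; adjust for your deployment.
    let client = HttpClientBuilder::default().build("http://localhost:8323")?;
    // Fetch relay capabilities as untyped JSON for inspection.
    let capabilities: serde_json::Value =
        client.request("wallet_getCapabilities", rpc_params![]).await?;
    println!("{capabilities:#}");
    Ok(())
}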
Quote Generation Process:
async fn estimate_fee(
&self,
intent: PartialIntent,
chain_id: ChainId,
prehash: bool,
context: FeeEstimationContext,
) -> Result<(AssetDiffs, Quote), RelayError> {
// 1. Validate chain support and fee token
let chain = self.chains.get(chain_id)?;
let token = self.fee_tokens.find(chain_id, &context.fee_token)?;
// 2. Create mock key for simulation
let mock_key = KeyWith712Signer::random_admin(context.account_key.keyType)?;
// 3. Build state overrides for simulation
let overrides = StateOverridesBuilder::with_capacity(2)
.append(self.simulator(), AccountOverride::default().with_balance(U256::MAX))
.append(intent.eoa, AccountOverride::default()
.with_balance(U256::MAX.div_ceil(2))
.with_state_diff(context.account_key.storage_slots())
.with_code_opt(context.authorization_address.map(|addr| {
Bytes::from([&EIP7702_DELEGATION_DESIGNATOR, addr.as_slice()].concat())
}))
)
.build();
// 4. Parallel execution of validation steps
let (orchestrator, delegation, fee_history, eth_price) = try_join4(
// Validate orchestrator support
async { self.get_supported_orchestrator(&account).await },
// Validate delegation implementation
self.has_supported_delegation(&account),
// Fetch current gas prices
provider.get_fee_history(EIP1559_FEE_ESTIMATION_PAST_BLOCKS, Default::default(), &[self.priority_fee_percentile]),
// Get token price in ETH
async { self.price_oracle.eth_price(token.kind).await },
).await?;
// 5. Calculate gas prices and payments
let native_fee_estimate = Eip1559Estimator::default().estimate(
fee_history.latest_block_base_fee().unwrap_or_default(),
&fee_history.reward.unwrap_or_default(),
);
// 6. Build and sign intent for simulation
let mut intent_to_sign = Intent {
eoa: intent.eoa,
execution_data: intent.execution_data.clone(),
nonce: intent.nonce,
payer: intent.payer.unwrap_or_default(),
payment_token: token.address,
payment_recipient: self.fee_recipient,
supported_account_implementation: delegation,
encoded_pre_calls: intent.pre_calls.into_iter()
.map(|pc| pc.abi_encode().into()).collect(),
encoded_fund_transfers: intent.fund_transfers.into_iter()
.map(|(token, amount)| Transfer { token, amount }.abi_encode().into()).collect(),
..Default::default()
};
// 7. Handle multichain context
if let IntentKind::MultiOutput { settler_context, .. } = &context.intent_kind {
let interop = self.chains.interop()?;
intent_to_sign.settler = interop.settler_address();
intent_to_sign.settler_context = settler_context.clone();
}
// 8. Estimate L1 data availability fees (for L2s)
let extra_payment = self.estimate_extra_fee(&chain, &intent_to_sign).await?;
// 9. Simulate execution to get accurate gas estimates
let (asset_diff, sim_result) = orchestrator
.simulate_execute(
self.simulator(),
&intent_to_sign,
context.account_key.keyType,
payment_per_gas,
self.asset_info.clone(),
)
.await?;
// 10. Build final quote
let quote = Quote {
chain_id,
payment_token_decimals: token.decimals,
intent: intent_to_sign,
extra_payment,
eth_price,
tx_gas: gas_estimate.tx,
native_fee_estimate,
authorization_address: context.authorization_address,
orchestrator: *orchestrator.address(),
};
Ok((asset_diff, quote))
}
The transaction service implements a sophisticated queuing and processing system:
pub struct TransactionService {
    chains: Chains,
    storage: RelayStorage,
    fee_escalation: FeeEscalationConfig,
    max_retries: u32,
    retry_delay: Duration,
    processing_interval: Duration,
}
impl TransactionService {
pub async fn send_transaction(&self, tx: RelayTransaction) -> Result<(), TransactionError> {
// 1. Store transaction
self.storage.add_transaction(tx.clone()).await?;
// 2. Add to processing queue
self.queue_transaction(tx).await?;
Ok(())
}
async fn process_transaction_queue(&self) -> Result<(), TransactionError> {
loop {
// Get next pending transaction
            let tx = self.storage.get_next_pending_transaction().await?;
            let tx_id = tx.id;
            // Process with retry logic
            let result = self.execute_with_retry(tx).await;
            // Update status
            self.storage.update_transaction_status(tx_id, result).await?;
tokio::time::sleep(self.processing_interval).await;
}
}
async fn execute_with_retry(&self, tx: RelayTransaction) -> TransactionStatus {
let mut attempts = 0;
while attempts < self.max_retries {
match self.submit_transaction(&tx).await {
Ok(hash) => {
return TransactionStatus::Submitted { hash, block: None };
}
Err(e) if e.is_retriable() => {
attempts += 1;
tokio::time::sleep(self.retry_delay * attempts).await;
}
Err(e) => {
return TransactionStatus::Failed { error: e.to_string() };
}
}
}
TransactionStatus::Failed { error: "Max retries exceeded".to_string() }
}
}
The relay implements automatic fee escalation to ensure timely transaction inclusion:
pub struct FeeEscalationConfig {
pub initial_multiplier: f64, // 1.1 (10% increase)
pub max_multiplier: f64, // 3.0 (300% of original)
pub escalation_interval: Duration, // 30 seconds
pub escalation_rate: f64, // 1.2 (20% increase per interval)
}
impl RelayTransaction {
    pub fn escalate_fees(&mut self, config: &FeeEscalationConfig, attempt: u32) -> bool {
        let multiplier = config.initial_multiplier * config.escalation_rate.powi(attempt as i32);
        if multiplier > config.max_multiplier {
            // Escalation ceiling reached; stop bumping fees for this transaction.
            return false;
        }
        self.max_fee_per_gas = (self.max_fee_per_gas as f64 * multiplier) as u128;
        self.max_priority_fee_per_gas = (self.max_priority_fee_per_gas as f64 * multiplier) as u128;
        true
    }
}
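As a quick sanity check on those numbers (using the sample config values above), the effective multiplier grows geometrically per retry attempt, and escalation stops once the cap would be exceeded:
fn main() {
    let (initial, rate, max) = (1.1_f64, 1.2_f64, 3.0_f64);
    for attempt in 0..8u32 {
        let multiplier = initial * rate.powi(attempt as i32);
        if multiplier > max {
            println!("attempt {attempt}: {multiplier:.2}x exceeds the cap, stop escalating");
            break;
        }
        println!("attempt {attempt}: bump fees to {multiplier:.2}x of the original");
    }
}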
The relay supports multiple settlement mechanisms:
#[async_trait]
pub trait Settler: Send + Sync + std::fmt::Debug {
fn id(&self) -> SettlerId;
fn address(&self) -> Address;
async fn build_execute_send_transaction(
&self,
settlement_id: B256,
current_chain_id: u64,
source_chains: Vec<u64>,
orchestrator: Address,
) -> Result<Option<RelayTransaction>, SettlementError>;
fn encode_settler_context(&self, destination_chains: Vec<u64>) -> Result<Bytes, SettlementError>;
async fn wait_for_verifications(
&self,
bundle: &InteropBundle,
timeout: Duration,
) -> Result<VerificationResult, SettlementError>;
async fn build_execute_receive_transactions(
&self,
bundle: &InteropBundle,
) -> Result<Vec<RelayTransaction>, SettlementError>;
}
pub struct LayerZeroSettler {
endpoints: HashMap<ChainId, LayerZeroEndpoint>,
verification_timeout: Duration,
settler_address: Address,
}
impl LayerZeroSettler {
pub async fn wait_for_verifications(
&self,
bundle: &InteropBundle,
timeout: Duration,
) -> Result<VerificationResult, SettlementError> {
let start = Instant::now();
let mut verified_chains = HashSet::new();
let target_chains: HashSet<_> = bundle.destination_chains().collect();
while start.elapsed() < timeout && verified_chains.len() < target_chains.len() {
for &chain_id in &target_chains {
if verified_chains.contains(&chain_id) {
continue;
}
if self.check_message_verification(bundle.settlement_id(), chain_id).await? {
verified_chains.insert(chain_id);
}
}
tokio::time::sleep(Duration::from_secs(5)).await;
}
        let failed_chains: Vec<_> = target_chains.difference(&verified_chains).copied().collect();
        Ok(VerificationResult {
            verified_chains: verified_chains.into_iter().collect(),
            failed_chains,
            total_duration: start.elapsed(),
        })
}
}
The relay uses a greedy, highest-balance-first algorithm to source funds across chains:
async fn source_funds(
&self,
eoa: Address,
request_key: &CallKey,
assets: GetAssetsResponse,
destination_chain_id: ChainId,
requested_asset: AddressOrNative,
amount: U256,
) -> Result<Option<Vec<FundSource>>, RelayError> {
// 1. Check existing balance on destination chain
let existing = assets.balance_on_chain(destination_chain_id, requested_asset);
let mut remaining = amount.saturating_sub(existing);
if remaining.is_zero() {
return Ok(Some(vec![])); // No funding needed
}
// 2. Collect funding sources sorted by balance (highest first)
let mut sources: Vec<(ChainId, U256)> = assets.chains()
.filter_map(|(&chain, assets)| {
if chain == destination_chain_id { return None; }
let balance = assets.find_asset_balance(requested_asset)?;
if balance.is_zero() { None } else { Some((chain, balance)) }
})
.collect();
sources.sort_unstable_by(|a, b| b.1.cmp(&a.1));
// 3. Iteratively build funding plan
let mut plan = Vec::new();
for (chain, balance) in sources {
if remaining.is_zero() { break; }
// Simulate escrow cost for this chain
let funding_context = FundingIntentContext {
eoa, chain_id: chain, asset: requested_asset.into(),
amount: U256::from(1), // Minimum for cost estimation
fee_token: requested_asset.address(),
output_intent_digest: B256::default(),
output_chain_id: destination_chain_id,
};
let escrow_cost = self.estimate_escrow_cost(funding_context, request_key).await?;
let take = remaining.min(balance.saturating_sub(escrow_cost));
if !take.is_zero() {
plan.push(FundSource {
chain_id: chain,
amount: take,
address: requested_asset.address(),
cost: escrow_cost,
});
remaining = remaining.saturating_sub(take);
}
}
if remaining.is_zero() { Ok(Some(plan)) } else { Ok(None) }
}
The Porto SDK provides multiple integration patterns for different use cases:
import { createConfig } from 'wagmi'
import { portoConnector } from 'porto/wagmi'
const config = createConfig({
connectors: [
portoConnector({
mode: 'rpc', // or 'dialog'
endpoint: 'https://relay.ithaca.xyz'
})
],
// ... other config
})
// Usage in React components
function SendTransaction() {
  const { sendTransaction, data: hash } = useSendTransaction({
    mutation: {
      onSuccess: (hash) => {
        console.log('Transaction sent:', hash)
      }
    }
  })
const handleSend = () => {
sendTransaction({
to: '0x...',
value: parseEther('0.1'),
data: '0x...'
})
}
return <button onClick={handleSend}>Send Transaction</button>
}
import { Porto } from 'porto'
// Create Porto instance
const porto = Porto.create({
mode: 'rpc',
endpoint: 'https://relay.ithaca.xyz'
})
// Send transactions
async function sendCalls() {
const result = await porto.sendCalls({
calls: [
{
to: '0x...',
value: '1000000000000000000', // 1 ETH in wei
data: '0x...'
}
],
chainId: 1
})
console.log('Bundle ID:', result.id)
// Monitor status
const status = await porto.getCallsStatus({ id: result.id })
console.log('Status:', status)
}
For applications requiring user interaction:
const porto = Porto.create({
mode: 'dialog',
endpoint: 'https://dialog.ithaca.xyz'
})
// Automatically opens dialog for user approval
await porto.sendCalls({
calls: [...],
chainId: 1
})
// Connect to existing account
await porto.connect()
// Get account capabilities
const capabilities = await porto.getCapabilities()
// Manage permissions
await porto.grantPermissions({
permissions: [
{
type: 'spend',
token: '0x...',
limit: '1000000000000000000', // 1 ETH
period: 'day'
},
{
type: 'call',
target: '0x...',
selector: '0xa9059cbb' // transfer(address,uint256)
}
]
})
// Execute multiple operations atomically
await porto.sendCalls({
calls: [
// Approve token spending
{
to: tokenAddress,
data: encodeCall('approve', [spenderAddress, amount])
},
// Execute swap
{
to: dexAddress,
data: encodeCall('swap', [tokenA, tokenB, amount])
},
// Stake received tokens
{
to: stakingAddress,
data: encodeCall('stake', [amount])
}
]
})
// Automatically sources funds from multiple chains
await porto.sendCalls({
calls: [
{
to: nftMarketplace,
value: parseEther('5'), // Requires 5 ETH
data: encodeCall('buyNFT', [tokenId])
}
],
chainId: 1, // Execute on mainnet
requiredFunds: [
{
asset: 'native', // ETH
amount: parseEther('5')
}
]
})
// SDK automatically sources ETH from other chains if needed
The relay includes automated liquidity management and rebalancing for cross-chain operations:
pub struct LiquidityTracker {
chains: HashMap<ChainId, ChainLiquidity>,
rebalance_thresholds: RebalanceConfig,
bridge_integrations: Vec<Box<dyn BridgeIntegration>>,
}
pub struct ChainLiquidity {
pub native_balance: U256,
pub token_balances: HashMap<Address, U256>,
pub pending_inflows: U256,
pub pending_outflows: U256,
pub last_updated: SystemTime,
}
impl LiquidityTracker {
pub async fn ensure_liquidity(
&self,
chain_id: ChainId,
token: Address,
required_amount: U256,
) -> Result<(), LiquidityError> {
let current = self.get_available_liquidity(chain_id, token).await?;
if current < required_amount {
let deficit = required_amount - current;
self.initiate_rebalance(chain_id, token, deficit).await?;
}
Ok(())
}
async fn initiate_rebalance(
&self,
target_chain: ChainId,
token: Address,
amount: U256,
) -> Result<(), LiquidityError> {
// Find best source chain
let source_chain = self.find_best_source(token, amount).await?;
// Execute bridge operation
let bridge = self.select_bridge(source_chain, target_chain, token)?;
bridge.transfer(source_chain, target_chain, token, amount).await?;
Ok(())
}
}
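The tracker's notion of "available" liquidity is referenced but not defined above; one plausible definition, assumed here purely for illustration, nets pending flows against the on-hand balance:
use alloy_primitives::{Address, U256};
use std::collections::HashMap;
/// Illustrative only: available = balance + pending inflows - pending outflows,
/// saturating at zero so reservations never go negative.
pub struct ChainLiquiditySnapshot {
    pub native_balance: U256,
    pub token_balances: HashMap<Address, U256>,
    pub pending_inflows: U256,
    pub pending_outflows: U256,
}
pub fn available_liquidity(liq: &ChainLiquiditySnapshot, token: Option<Address>) -> U256 {
    let balance = match token {
        None => liq.native_balance, // native asset
        Some(addr) => liq.token_balances.get(&addr).copied().unwrap_or(U256::ZERO),
    };
    (balance + liq.pending_inflows).saturating_sub(liq.pending_outflows)
}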
#[async_trait]
pub trait BridgeIntegration: Send + Sync {
fn name(&self) -> &str;
fn supported_chains(&self) -> &[ChainId];
fn supported_tokens(&self, chain_id: ChainId) -> &[Address];
async fn estimate_cost(
&self,
from_chain: ChainId,
to_chain: ChainId,
token: Address,
amount: U256,
) -> Result<BridgeCost, BridgeError>;
async fn transfer(
&self,
from_chain: ChainId,
to_chain: ChainId,
token: Address,
amount: U256,
) -> Result<BridgeTransfer, BridgeError>;
}
// Example: Binance Bridge Integration
pub struct BinanceBridge {
client: BinanceSpotApi,
config: BinanceBridgeConfig,
}
#[async_trait]
impl BridgeIntegration for BinanceBridge {
async fn transfer(
&self,
from_chain: ChainId,
to_chain: ChainId,
token: Address,
amount: U256,
) -> Result<BridgeTransfer, BridgeError> {
// 1. Sell token on source chain
let sell_order = self.create_sell_order(token, amount).await?;
// 2. Buy token on destination chain
let buy_order = self.create_buy_order(token, amount, to_chain).await?;
// 3. Execute withdrawal to destination
let withdrawal = self.withdraw_to_chain(to_chain, token, amount).await?;
Ok(BridgeTransfer {
id: withdrawal.id,
estimated_arrival: withdrawal.estimated_time,
cost: sell_order.fee + buy_order.fee + withdrawal.fee,
})
}
}
The relay maintains real-time price feeds for accurate fee calculations:
pub struct PriceOracle {
tx: mpsc::UnboundedSender<PriceOracleMessage>,
constant_rate: Option<f64>,
}
pub enum PriceOracleMessage {
Update {
fetcher: PriceFetcher,
prices: Vec<(CoinPair, f64)>,
timestamp: Instant
},
Lookup {
pair: CoinPair,
tx: oneshot::Sender<Option<f64>>
},
}
impl PriceOracle {
    pub async fn eth_price(&self, coin: CoinKind) -> Option<U256> {
        if coin.is_eth() {
            return Some(U256::from(1_000_000_000_000_000_000u64)); // 1:1 for ETH (1e18)
        }
        let (req_tx, req_rx) = oneshot::channel();
        self.tx.send(PriceOracleMessage::Lookup {
            pair: CoinPair { from: coin, to: CoinKind::ETH },
            tx: req_tx,
        }).ok()?;
req_rx.await
.ok()
.flatten()
.or(self.constant_rate)
.map(|eth_price| U256::from((eth_price * 1e18) as u128))
}
pub fn spawn_fetcher(
&self,
coin_registry: Arc<CoinRegistry>,
fetcher: PriceFetcher,
pairs: &[CoinPair],
) {
match fetcher {
PriceFetcher::CoinGecko => {
CoinGecko::launch(coin_registry, pairs, self.tx.clone())
}
}
}
}
// CoinGecko integration
impl CoinGecko {
pub fn launch(
coin_registry: Arc<CoinRegistry>,
pairs: &[CoinPair],
oracle_tx: mpsc::UnboundedSender<PriceOracleMessage>,
    ) {
        // Own the pairs so the spawned task satisfies the 'static bound.
        let pairs = pairs.to_vec();
        tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(30));
loop {
interval.tick().await;
                match Self::fetch_prices(&coin_registry, &pairs).await {
Ok(prices) => {
oracle_tx.send(PriceOracleMessage::Update {
fetcher: PriceFetcher::CoinGecko,
prices,
timestamp: Instant::now(),
}).ok();
}
Err(e) => {
tracing::error!("Failed to fetch prices: {}", e);
}
}
}
});
}
}
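The oracle's output feeds directly into quoting: a native gas cost has to be converted into fee-token units. A minimal sketch, assuming eth_price is the value of one whole fee token expressed in wei (the 1e18 scaling used above) and rounding up so the relay never undercharges:
use alloy_primitives::U256;
/// Convert a native gas cost (in wei) into fee-token base units. Illustrative only.
fn native_cost_to_fee_token(cost_wei: U256, eth_price: U256, token_decimals: u8) -> U256 {
    let token_unit = U256::from(10u64).pow(U256::from(token_decimals));
    // Round up so fractional units are charged to the user, not the relay.
    (cost_wei * token_unit).div_ceil(eth_price)
}
fn main() {
    // Example: 0.001 ETH of gas, a 6-decimal token priced at 0.0005 ETH each.
    let cost = U256::from(1_000_000_000_000_000u64);  // 1e15 wei
    let price = U256::from(500_000_000_000_000u64);   // 5e14 wei per whole token
    assert_eq!(native_cost_to_fee_token(cost, price, 6), U256::from(2_000_000u64)); // 2.0 tokens
}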
The relay provides comprehensive observability:
pub struct RpcMetricsService<S> {
service: S,
}
impl<S> RpcServiceT for RpcMetricsService<S>
where S: RpcServiceT<MethodResponse = MethodResponse> + Send + Sync + Clone + 'static
{
fn call<'a>(&self, req: Request<'a>) -> impl Future<Output = Self::MethodResponse> + Send + 'a {
async move {
let method = req.method_name().replace("wallet_", "relay_");
let span = span!(
Level::INFO,
"request",
otel.kind = ?SpanKind::Server,
otel.name = format!("relay/{}", method),
rpc.jsonrpc.version = "2.0",
rpc.system = "jsonrpc",
rpc.jsonrpc.request_id = %req.id(),
rpc.method = method,
);
let timer = Instant::now();
let response = self.service.call(req).instrument(span.clone()).await;
let elapsed = timer.elapsed();
// Record metrics
if let Some(error_code) = response.as_error_code() {
span.record("rpc.jsonrpc.error_code", error_code);
counter!(
"rpc.call.count",
"method" => method.clone(),
"code" => error_code.to_string()
).increment(1);
}
histogram!(
"rpc.call.latency",
"method" => method
).record(elapsed.as_millis() as f64);
response
}
}
}
// Periodic metrics collection
pub async fn spawn_periodic_collectors(
chains: Chains,
storage: RelayStorage,
) {
// Balance metrics
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(60));
loop {
interval.tick().await;
for chain in chains.chains() {
if let Ok(balance) = chain.provider().get_balance(chain.fee_recipient()).await {
gauge!(
"relay.balance",
"chain" => chain.id().to_string(),
"token" => "native"
).set(balance.to::<f64>());
}
}
}
});
// Transaction metrics
tokio::spawn(async move {
let mut interval = tokio::time::interval(Duration::from_secs(30));
loop {
interval.tick().await;
if let Ok(stats) = storage.get_transaction_stats().await {
gauge!("relay.transactions.pending").set(stats.pending as f64);
gauge!("relay.transactions.confirmed").set(stats.confirmed as f64);
gauge!("relay.transactions.failed").set(stats.failed as f64);
}
}
});
}
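The counter!/gauge!/histogram! macros above come from the metrics facade and need an exporter before Prometheus can scrape anything. A minimal sketch, assuming the metrics-exporter-prometheus crate and the 8324 metrics port used elsewhere in this guide (the relay's actual exporter wiring may differ):
use metrics_exporter_prometheus::PrometheusBuilder;
use std::net::SocketAddr;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Serve /metrics on the same port the deployment manifests expose.
    let addr: SocketAddr = "0.0.0.0:8324".parse()?;
    PrometheusBuilder::new()
        .with_http_listener(addr)
        .install()?;
    // From here on, counter!/gauge!/histogram! calls are visible to Prometheus.
    metrics::counter!("relay.start.count").increment(1);
    // ... run the relay ...
    Ok(())
}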
# 1. Clone repositories
git clone https://github.com/ithacaxyz/relay.git
git clone https://github.com/ithacaxyz/porto.git
# 2. Setup relay development environment
cd relay
git submodule update --init --recursive
# 3. Install dependencies
cargo install cargo-nextest cargo-llvm-cov
# 4. Setup database
docker run -d \
--name relay-postgres \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=relay \
-p 5432:5432 \
postgres:15
# 5. Run migrations
sqlx migrate run --database-url postgres://postgres:password@localhost/relay
# 6. Start local test environment
anvil --chain-id 31337 --host 0.0.0.0 --port 8545
# 7. Deploy contracts
cd tests/account
forge build
forge script DeployScript --rpc-url http://localhost:8545 --broadcast
# 8. Configure relay
cat > relay.yaml << EOF
server:
address: "127.0.0.1"
port: 8323
chain:
endpoints:
- "http://localhost:8545"
fee_tokens:
- "0x..." # Deployed ERC20 address
quote:
ttl: 300
gas:
intent_buffer: 25000
tx_buffer: 1000000
contracts:
orchestrator: "0x..."
delegation_proxy: "0x..."
simulator: "0x..."
funder: "0x..."
escrow: "0x..."
EOF
# 9. Run relay
cargo run --bin relay
# Run all unit tests
cargo test --lib
# Run with coverage
cargo llvm-cov test --lib --html
# Build contracts first
cd tests/account && forge build
# Run e2e tests
cargo test e2e -- --test-threads=1
# Run specific test case
cargo test e2e::cases::simple::auth_then_erc20_transfer
# Create test accounts and run continuous load
cargo run --bin stress -- \
--relay-url http://localhost:8323 \
--private-key $TEST_PRIVATE_KEY \
--chain-id 31337 \
--fee-token $ERC20_ADDRESS \
--rpc-url http://localhost:8545 \
--accounts 100
# Format code (requires nightly)
cargo +nightly fmt
# Run clippy with strict settings
cargo clippy -- -D warnings
# Check all feature combinations
cargo check --all-features
cargo check --no-default-features
# Security audit
cargo audit
# Dependency analysis
cargo deps --all-deps | dot -Tpng > deps.png
RUST_LOG=debug,relay=trace cargo run --bin relay
-- Check transaction status
SELECT id, chain_id, status, created_at, updated_at
FROM transactions
ORDER BY created_at DESC
LIMIT 10;
-- Check bundle status
SELECT b.id, b.status, COUNT(bt.transaction_id) as tx_count
FROM bundles b
LEFT JOIN bundle_transactions bt ON b.id = bt.bundle_id
GROUP BY b.id, b.status;
-- Check failed transactions
SELECT t.id, t.error, t.retry_count
FROM transactions t
WHERE t.status = 'failed';
# Profile with flamegraph
cargo install flamegraph
cargo flamegraph --bin relay
# Memory profiling with valgrind
cargo install cargo-valgrind
cargo valgrind run --bin relay
# Multi-stage build for optimal size
FROM rust:1.85-alpine as builder
# Install build dependencies
RUN apk add --no-cache musl-dev postgresql-dev
WORKDIR /app
COPY . .
# Build with optimizations
RUN cargo build --release --bin relay
FROM alpine:3.18
RUN apk add --no-cache ca-certificates postgresql-client
COPY --from=builder /app/target/release/relay /usr/local/bin/
COPY migrations/ /app/migrations/
EXPOSE 8323 8324
CMD ["relay"]
apiVersion: apps/v1
kind: Deployment
metadata:
name: ithaca-relay
spec:
replicas: 3
selector:
matchLabels:
app: ithaca-relay
template:
metadata:
labels:
app: ithaca-relay
spec:
containers:
- name: relay
image: ghcr.io/ithacaxyz/relay:latest
ports:
- containerPort: 8323
name: rpc
- containerPort: 8324
name: metrics
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: relay-secrets
key: database-url
- name: SIGNING_KEY
valueFrom:
secretKeyRef:
name: relay-secrets
key: signing-key
resources:
requests:
memory: "512Mi"
cpu: "500m"
limits:
memory: "2Gi"
cpu: "2000m"
livenessProbe:
httpGet:
path: /health
port: 8323
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /health
port: 8323
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: relay-service
spec:
selector:
app: ithaca-relay
ports:
- name: rpc
port: 8323
targetPort: 8323
- name: metrics
port: 8324
targetPort: 8324
# production.yaml
server:
address: "0.0.0.0"
port: 8323
cors_origins: ["https://porto.sh", "https://app.ithaca.xyz"]
metrics:
address: "0.0.0.0"
port: 8324
database:
url: "${DATABASE_URL}"
max_connections: 20
min_connections: 5
connect_timeout: 30
idle_timeout: 600
chain:
endpoints:
- "${MAINNET_RPC_URL}"
- "${ARBITRUM_RPC_URL}"
- "${OPTIMISM_RPC_URL}"
fee_tokens:
- "0xA0b86a33E6417c34d4a9b3c0fe3c4b4a3f24F5C9" # USDC
- "0xdAC17F958D2ee523a2206206994597C13D831ec7" # USDT
contracts:
orchestrator: "${ORCHESTRATOR_ADDRESS}"
delegation_proxy: "${DELEGATION_PROXY_ADDRESS}"
simulator: "${SIMULATOR_ADDRESS}"
funder: "${FUNDER_ADDRESS}"
escrow: "${ESCROW_ADDRESS}"
signers:
quote_signer: "${QUOTE_SIGNER_KEY}"
funder_signer: "${FUNDER_SIGNER_KEY}"
quote:
ttl: 300
gas:
intent_buffer: 25000
tx_buffer: 1000000
fee_escalation:
initial_multiplier: 1.1
max_multiplier: 3.0
escalation_interval: 30
escalation_rate: 1.2
liquidity:
rebalance_threshold: 0.1 # 10% of target
min_reserve_ratio: 0.05 # 5% minimum reserve
interop:
settlement_timeout: 600 # 10 minutes
verification_timeout: 300 # 5 minutes
{
"dashboard": {
"title": "Ithaca Relay",
"panels": [
{
"title": "RPC Request Rate",
"type": "graph",
"targets": [
{
"expr": "rate(rpc_call_count[5m])",
"legendFormat": "{{method}}"
}
]
},
{
"title": "Transaction Success Rate",
"type": "stat",
"targets": [
{
"expr": "rate(relay_transactions_confirmed[5m]) / rate(relay_transactions_submitted[5m])"
}
]
},
{
"title": "Cross-Chain Settlement Latency",
"type": "heatmap",
"targets": [
{
"expr": "histogram_quantile(0.95, rate(settlement_duration_seconds_bucket[5m]))"
}
]
}
]
}
}
groups:
- name: relay-alerts
rules:
- alert: RelayHighErrorRate
expr: rate(rpc_call_count{code!="0"}[5m]) > 0.1
for: 2m
labels:
severity: warning
annotations:
summary: "High RPC error rate detected"
- alert: RelayDatabaseConnections
expr: db_connections_active / db_connections_max > 0.8
for: 1m
labels:
severity: critical
annotations:
summary: "Database connection pool near capacity"
- alert: RelayLowLiquidity
expr: relay_liquidity_ratio < 0.05
for: 5m
labels:
severity: warning
annotations:
summary: "Low liquidity detected on {{ $labels.chain }}"
Symptom: AuthError::EoaNotDelegated
Cause: Account hasn't completed EIP-7702 authorization
Solution:
# Check account delegation status
curl -X POST http://localhost:8323 \
-H "Content-Type: application/json" \
-d '{
"method": "wallet_getKeys",
"params": {
"address": "0x...",
"chainId": 1
},
"id": 1
}'
# If no keys returned, account needs delegation
curl -X POST http://localhost:8323 \
-H "Content-Type: application/json" \
-d '{
"method": "wallet_prepareUpgradeAccount",
"params": {
"address": "0x...",
"delegation": "0x...",
"capabilities": {
"authorizeKeys": [...]
}
},
"id": 1
}'
Symptom: RelayError::InsufficientFunds
Cause: Not enough assets across all chains to fulfill request
Solution:
# Check asset distribution
curl -X POST http://localhost:8323 \
-H "Content-Type: application/json" \
-d '{
"method": "wallet_getAssets",
"params": {
"account": "0x...",
"chainFilter": [1, 42161, 10],
"assetTypeFilter": ["native", "erc20"]
},
"id": 1
}'
# Fund accounts on additional chains if needed
Symptom: QuoteError::QuoteExpired
Cause: Too much time elapsed between prepare and send
Solution:
// Reduce time between prepare and send
const { context, digest } = await porto.prepareCalls(params)
// Sign immediately
const signature = await wallet.signMessage(digest)
// Send without delay
await porto.sendPreparedCalls({
context,
signature,
capabilities: {}
})
Symptom: Transactions stuck in pending
Cause: Concurrent transactions with the same sequence key
Solution:
// Use different sequence keys for parallel execution
let nonce1 = (random_sequence_key_1 << 64) | next_nonce;
let nonce2 = (random_sequence_key_2 << 64) | next_nonce;
// Or wait for previous transaction to confirm
let mut status = relay.get_calls_status(bundle_id).await?;
while !status.is_final() {
tokio::time::sleep(Duration::from_millis(1000)).await;
status = relay.get_calls_status(bundle_id).await?;
}
-- Add indexes for common queries
CREATE INDEX CONCURRENTLY idx_transactions_status_chain
ON transactions (status, chain_id, created_at);
CREATE INDEX CONCURRENTLY idx_bundles_status_updated
ON bundles (status, updated_at);
CREATE INDEX CONCURRENTLY idx_bundle_transactions_bundle_id
ON bundle_transactions (bundle_id);
-- Partition large tables by date
CREATE TABLE transactions_y2024m01 PARTITION OF transactions
FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
// Use streaming for large result sets
impl RelayStorage {
pub fn stream_transactions(
&self,
filter: TransactionFilter,
) -> impl Stream<Item = Result<RelayTransaction, StorageError>> + '_ {
        // `fetch` already yields a stream of rows; just map the error type
        // (StreamExt::map from futures_util).
        sqlx::query_as::<_, RelayTransaction>(
            "SELECT * FROM transactions WHERE status = $1 ORDER BY created_at"
        )
        .bind(filter.status)
        .fetch(&self.pool)
        .map(|row| row.map_err(StorageError::from))
}
}
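A caller can then consume the rows incrementally instead of buffering the whole result set. A hedged usage sketch, assuming futures_util for StreamExt/pin_mut and the surrounding storage types:
use futures_util::{pin_mut, StreamExt};
async fn drain(storage: &RelayStorage, filter: TransactionFilter) -> Result<(), StorageError> {
    let stream = storage.stream_transactions(filter);
    pin_mut!(stream);
    // Handle each transaction as soon as Postgres yields its row.
    while let Some(row) = stream.next().await {
        let tx = row?;
        tracing::debug!(id = ?tx.id, "processing streamed transaction");
    }
    Ok(())
}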
database:
max_connections: 50
min_connections: 10
acquire_timeout: 30
idle_timeout: 600
max_lifetime: 3600
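Those settings map naturally onto sqlx's PgPoolOptions. A minimal sketch of applying them at startup (reading the URL from DATABASE_URL is an assumption):
use sqlx::postgres::PgPoolOptions;
use std::time::Duration;

async fn build_pool() -> Result<sqlx::PgPool, sqlx::Error> {
    let url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    PgPoolOptions::new()
        .max_connections(50)
        .min_connections(10)
        .acquire_timeout(Duration::from_secs(30))
        .idle_timeout(Duration::from_secs(600))
        .max_lifetime(Duration::from_secs(3600))
        .connect(&url)
        .await
}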
- Production: Use AWS KMS or HashiCorp Vault for key storage
- Development: Use local environment variables (never commit)
- Rotation: Implement automated key rotation for long-term operations
// Example: AWS KMS integration
pub struct KmsSigner {
client: aws_sdk_kms::Client,
key_id: String,
}
impl Eip712PayLoadSigner for KmsSigner {
async fn sign_payload_hash(&self, hash: B256) -> Result<Signature, SignerError> {
let response = self.client
.sign()
.key_id(&self.key_id)
.message(Blob::new(hash.as_slice()))
.message_type(MessageType::Digest)
.signing_algorithm(SigningAlgorithmSpec::EcdsaSha256)
.send()
.await?;
// Convert AWS signature format to Ethereum signature
let signature = response.signature().unwrap();
self.parse_aws_signature(signature)
}
}
use governor::{Quota, RateLimiter};
use std::num::NonZeroU32;
pub struct RateLimitedRelay {
    inner: Relay,
    limiter: governor::DefaultKeyedRateLimiter<String>,
}
impl RateLimitedRelay {
    pub fn new(relay: Relay, requests_per_minute: u32) -> Self {
        let quota = Quota::per_minute(NonZeroU32::new(requests_per_minute).expect("rate must be non-zero"));
        let limiter = RateLimiter::keyed(quota);
Self { inner: relay, limiter }
}
pub async fn prepare_calls(
&self,
request: PrepareCallsParameters,
client_ip: IpAddr,
) -> Result<PrepareCallsResponse, RelayError> {
// Check rate limit per IP
self.limiter.check_key(&client_ip.to_string())
.map_err(|_| RelayError::RateLimited)?;
self.inner.prepare_calls(request).await
}
}
#[derive(Serialize)]
pub struct HealthStatus {
pub version: String,
pub status: HealthCode,
pub chains: Vec<ChainHealth>,
pub database: DatabaseHealth,
pub uptime: Duration,
}
impl Relay {
pub async fn health_check(&self) -> HealthStatus {
let start_time = SystemTime::now();
// Check chain connectivity
let chain_futures = self.chains().map(|chain| async move {
let block_number = chain.provider().get_block_number().await;
ChainHealth {
chain_id: chain.id(),
status: if block_number.is_ok() {
HealthCode::Healthy
} else {
HealthCode::Unhealthy
},
latest_block: block_number.ok(),
}
});
let chains = futures::future::join_all(chain_futures).await;
// Check database
let db_status = match self.storage.ping().await {
Ok(_) => HealthCode::Healthy,
Err(_) => HealthCode::Unhealthy,
};
HealthStatus {
version: RELAY_SHORT_VERSION.to_string(),
status: if chains.iter().all(|c| c.status == HealthCode::Healthy) && db_status == HealthCode::Healthy {
HealthCode::Healthy
} else {
HealthCode::Degraded
},
chains,
database: DatabaseHealth { status: db_status },
uptime: start_time.elapsed().unwrap_or_default(),
}
}
}
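The Kubernetes probes shown earlier expect an HTTP /health endpoint. A minimal sketch of serving the health check with axum (the route wiring is an assumption for illustration, not the relay's actual server code):
use axum::{extract::State, routing::get, Json, Router};
use std::sync::Arc;

async fn health(State(relay): State<Arc<Relay>>) -> Json<HealthStatus> {
    // Delegates to the health_check method shown above.
    Json(relay.health_check().await)
}

async fn serve_health(relay: Arc<Relay>) -> Result<(), Box<dyn std::error::Error>> {
    let app = Router::new()
        .route("/health", get(health))
        .with_state(relay);
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8323").await?;
    axum::serve(listener, app).await?;
    Ok(())
}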
The Ithaca Relay and Porto ecosystem represent a significant advancement in Ethereum account abstraction and cross-chain interoperability. This comprehensive guide provides the foundation for understanding, developing, and deploying within this innovative system.
- Intent-Based Architecture: Simplifies user experience while maintaining full transaction control
- EIP-7702 Integration: Enables advanced programmability without breaking existing tooling
- Cross-Chain Native: Designed for seamless multi-chain operations from the ground up
- Production Ready: Comprehensive monitoring, error handling, and security features
- Developer Friendly: Extensive SDK support and clear integration patterns
- Developers: Start with the Porto SDK and build your first intent-based application
- Relay Operators: Deploy and configure your own relay for specific use cases
- Contributors: Explore the codebase and contribute to the growing ecosystem
The future of account abstraction is here, and it's built on the solid foundation of the Ithaca Relay and Porto ecosystem.