Building a Phone-as-Gateway for Sensor Data

When I started this project, I had a clear goal: stream motion sensor data from a microcontroller to a server in real time. I could have gone with an LTE-M capable microcontroller or LoRa communication, but I chose a different path: using a smartphone as a gateway. Almost everyone already has a phone, and for streaming human motion data, this approach turned out to be an excellent fit.

Why Phone-as-Gateway?

The decision to use a phone as an intermediary wasn't just about convenience. Here's why it made sense for my use case:

  • Universal availability: Nearly everyone carries a smartphone

  • Rich connectivity: Wi-Fi, cellular data, and Bluetooth all in one device

  • Processing power: Can handle data buffering, compression, and even basic ML

  • User interface: Built-in screen and input methods for configuration

  • Battery management: Established power management systems

Alternative approaches like LTE-M would have meant additional cellular contracts and hardware costs, and LoRa would have required its own gateway infrastructure. The phone-as-gateway approach leveraged existing infrastructure while providing flexibility for future enhancements.

Architecture Overview

The mobile app serves as a bridge between Bluetooth-connected sensors and a remote server. The architecture consists of several key components:

  • Foreground services for continuous operation

  • Bluetooth Low Energy management for sensor connectivity

  • Data batching and buffering for efficient transmission

  • Voice activity detection for intelligent audio streaming

  • Configuration management for sensor settings

  • Reconnection logic for robust connectivity

Foreground Services: The Backbone

The most critical architectural decision was implementing foreground services in native Kotlin. React Native alone couldn't provide the reliability needed for continuous data streaming when the app is backgrounded.
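As a point of reference, here's a minimal sketch of what promoting a service to the foreground involves on Android; the class name, channel id, and notification text below are placeholders rather than the app's actual code:

import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.IBinder

// Minimal sketch: a service that promotes itself to the foreground so Android
// keeps it alive while the React Native UI is backgrounded. Names are illustrative.
class SketchForegroundService : Service() {

    override fun onCreate() {
        super.onCreate()
        // Android 8+ requires a notification channel for the foreground notification.
        val channel = NotificationChannel(
            "sensor_channel", "Sensor streaming", NotificationManager.IMPORTANCE_LOW
        )
        getSystemService(NotificationManager::class.java).createNotificationChannel(channel)
    }

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        val notification = Notification.Builder(this, "sensor_channel")
            .setContentTitle("Streaming sensor data")
            .setSmallIcon(android.R.drawable.stat_notify_sync)
            .build()
        // startForeground keeps the process from being killed when the app leaves the screen.
        startForeground(1, notification)
        return START_STICKY // ask the system to restart the service if it is killed
    }

    override fun onBind(intent: Intent?): IBinder? = null
}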

I implemented two separate foreground services:

Background Sensor Service

This service handles the core sensor data pipeline:

class BackgroundSensorService : Service() {
    // One buffer of raw BLE batches per connected device
    private val deviceDataBuffers = ConcurrentHashMap<String, MutableList<ByteArray>>()
    // Timestamp of the last successful upload per device
    private val lastServerUpdateTime = ConcurrentHashMap<String, Long>()
    // Coroutine scope tied to the service lifetime (kotlinx.coroutines)
    private val serviceScope = CoroutineScope(SupervisorJob() + Dispatchers.IO)
    private val httpClient by lazy { MyMutualTlsModule.createClient(this) }

    private fun checkAndUploadBatchedData() {
        serviceScope.launch {
            val now = System.currentTimeMillis()

            for (deviceId in deviceDataBuffers.keys.toList()) {
                val bufferedBatches = deviceDataBuffers[deviceId] ?: continue
                val timeSinceLastUpdate = now - (lastServerUpdateTime[deviceId] ?: 0L)

                val shouldSend = when {
                    bufferedBatches.size >= 10 -> true
                    timeSinceLastUpdate > 10_000 && bufferedBatches.size >= 3 -> true
                    timeSinceLastUpdate > 15_000 && bufferedBatches.isNotEmpty() -> true
                    else -> false
                }

                if (shouldSend) {
                    uploadBatchedData(deviceId, bufferedBatches)
                }
            }
        }
    }
}

The service runs independently of the React Native lifecycle, ensuring data collection continues even when the app is minimized or the user switches to another app.
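The React Native side only needs a thin bridge to start this service. Here's a sketch of what such a native module could look like; the module and method names are illustrative:

import android.content.Intent
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod

// Illustrative bridge module: lets JavaScript start the native foreground service.
class SensorServiceModule(private val reactContext: ReactApplicationContext) :
    ReactContextBaseJavaModule(reactContext) {

    override fun getName() = "SensorServiceModule"

    @ReactMethod
    fun startSensorService() {
        val intent = Intent(reactContext, BackgroundSensorService::class.java)
        // On Android 8+ a service that will call startForeground() must be started this way.
        reactContext.startForegroundService(intent)
    }
}

From JavaScript, a module like this would be invoked via NativeModules.SensorServiceModule.startSensorService().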

Background Audio Service

For voice command processing, I needed a separate service that could handle real-time audio streaming:

class BackgroundAudioService : Service() {
    private lateinit var audioManager: AudioRecordingManager
    private lateinit var connectionManager: WebSocketConnectionManager
    
    private fun startAudioStreaming(url: String) {
        connectionManager.connectForStreaming(url) {
            if (audioManager.startRecording()) {
                startAudioProcessing(false)
            }
        }
    }
}

This service manages WebSocket connections to the server and implements Voice Activity Detection (VAD) to optimize battery usage by only transmitting audio when speech is detected.
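The WebSocketConnectionManager itself isn't shown above; a minimal version of its connect-then-stream pattern could be built on OkHttp's WebSocket API along these lines (class and callback names are assumptions):

import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.Response
import okhttp3.WebSocket
import okhttp3.WebSocketListener

// Illustrative sketch of a small connection manager built on OkHttp's WebSocket API.
class WebSocketConnectionManagerSketch(private val client: OkHttpClient) {

    var webSocket: WebSocket? = null
        private set

    // Opens the socket and invokes onOpen once the server has accepted the connection.
    fun connectForStreaming(url: String, onOpen: () -> Unit) {
        val request = Request.Builder().url(url).build()
        webSocket = client.newWebSocket(request, object : WebSocketListener() {
            override fun onOpen(webSocket: WebSocket, response: Response) {
                onOpen()
            }

            override fun onFailure(webSocket: WebSocket, t: Throwable, response: Response?) {
                // Surface the error to the service so it can schedule a reconnect.
            }
        })
    }

    fun close() {
        webSocket?.close(1000, "client shutdown")
    }
}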

Data Batching for Efficiency

One of the biggest challenges was handling high-frequency sensor data efficiently. Sending individual sensor readings would overwhelm both the Bluetooth connection and the server.

I implemented a two-stage batching system: the React Native layer forwards raw binary data to the background service, which buffers it and decides when to upload.

// React Native side - routing to background service
const processBinaryData = useCallback(
  (deviceId: string, data: ArrayBuffer) => {
    backgroundSensorService.addBinaryData(deviceId, data);
    
    updateDeviceState(deviceId, {
      lastUpdate: new Date().toLocaleTimeString(),
    });
  },
  [updateDeviceState],
);

The background service collects these binary batches and applies intelligent upload logic:

  • Count-based batching: Upload when 10+ batches accumulate

  • Time-based batching: Upload after 10-15 seconds regardless of count

  • Energy optimization: Prevents excessive network requests

This approach reduced network requests by 80-90% while maintaining data integrity.
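The addBinaryData and uploadBatchedData calls referenced earlier aren't shown in full. Inside BackgroundSensorService they might look roughly like this; the endpoint URL, payload format, and error handling are simplified assumptions:

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody

// Inside BackgroundSensorService: illustrative buffering and upload halves of the pipeline.
fun addBinaryData(deviceId: String, data: ByteArray) {
    // Keep one list of raw BLE batches per device; uploads drain it later.
    val buffer = deviceDataBuffers.getOrPut(deviceId) { mutableListOf() }
    synchronized(buffer) {
        buffer.add(data)
    }
}

private fun uploadBatchedData(deviceId: String, batches: MutableList<ByteArray>) {
    // Concatenate buffered batches into one payload so a single HTTP request covers many readings.
    val payload = batches.reduce { acc, bytes -> acc + bytes }
    val request = Request.Builder()
        .url("https://example.com/devices/$deviceId/sensor-data") // placeholder endpoint
        .post(payload.toRequestBody("application/octet-stream".toMediaType()))
        .build()

    httpClient.newCall(request).execute().use { response ->
        if (response.isSuccessful) {
            batches.clear()
            lastServerUpdateTime[deviceId] = System.currentTimeMillis()
        }
    }
}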

Voice Activity Detection (VAD)

For voice command functionality, I integrated WebRTC's VAD engine to optimize battery life:

private fun processAudioStream(webSocket: WebSocket, isEnrolling: Boolean) {
    while (isRecording) {
        val audioData = recordAudioFrame()

        if (isEnrolling || vadEngine.detectVoice(audioData)) {
            // Send audio data
            webSocket.send(ByteString.of(*audioData))
        }
        // Suppress silent frames to save battery
    }
}

The VAD implementation includes:

  • Pre-speech buffering: Captures 200ms before voice detection to prevent word loss

  • Temporal smoothing: Requires consecutive voice frames to reduce false positives

  • Post-speech padding: Continues transmission 300ms after voice ends

  • Configurable sensitivity: LOW/MEDIUM/HIGH thresholds for different environments

This typically achieves 40-70% reduction in audio data transmission while maintaining transcription accuracy.
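To make the pre-speech buffering, temporal smoothing, and post-speech padding concrete, here's a standalone sketch of the gating logic. The 20 ms frame size, the frame counts, and the VoiceDetector interface are assumptions for illustration, not the actual implementation:

import java.util.ArrayDeque

// Illustrative VAD gating around a frame-based audio loop (20 ms frames assumed).
class VadGateSketch(private val vadEngine: VoiceDetector) {

    private val preSpeechFrames = ArrayDeque<ByteArray>() // rolling ~200 ms of silence
    private var consecutiveVoiceFrames = 0
    private var hangoverFrames = 0

    fun onFrame(frame: ByteArray, send: (ByteArray) -> Unit) {
        if (vadEngine.detectVoice(frame)) {
            consecutiveVoiceFrames++
            if (consecutiveVoiceFrames >= 2) { // temporal smoothing: require 2 voiced frames in a row
                // Flush the pre-speech buffer first so the start of the word is not lost.
                while (preSpeechFrames.isNotEmpty()) send(preSpeechFrames.poll())
                send(frame)
                hangoverFrames = 15 // ~300 ms of post-speech padding
                return
            }
        } else {
            consecutiveVoiceFrames = 0
        }

        if (hangoverFrames > 0) {
            // Still inside the post-speech padding window: keep transmitting.
            hangoverFrames--
            send(frame)
        } else {
            // Silence: keep only the last ~200 ms (10 frames of 20 ms) for pre-speech context.
            preSpeechFrames.addLast(frame)
            if (preSpeechFrames.size > 10) preSpeechFrames.removeFirst()
        }
    }
}

// Minimal interface standing in for the WebRTC VAD wrapper used by the service.
interface VoiceDetector {
    fun detectVoice(frame: ByteArray): Boolean
}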

Configuration Management

I wanted users to be able to configure sensor parameters without reflashing firmware. The app provides a comprehensive settings interface:

const sendBLEConfigCommand = useCallback(
  async (sendCommand: (command: string) => Promise<void>, parameter: string, value: number) => {
    // Integers are sent with one decimal place ("5" -> "5.0"); fractional values keep
    // up to two decimal places, so the firmware always sees a consistent numeric format.
    const formattedValue = Number.isInteger(value)
      ? value.toFixed(1)
      : value.toFixed(Math.min(2, value.toString().length - value.toString().indexOf('.') - 1));

    const command = `set_${parameter} ${formattedValue}`;
    await sendCommand(command);
  },
  [],
);

Settings are stored locally and automatically synchronized when devices connect (a sketch of that sync step follows the list below). This includes:

  • Motion detection thresholds

  • Sampling rates (active/idle modes)

  • Bluetooth timeouts

  • Power management options
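As a sketch of that sync step, the stored values could simply be replayed as set_ commands right after a device connects; the keys, defaults, and preferences file name here are illustrative:

import android.content.Context

// Sketch: pushing locally stored settings to a sensor right after it connects.
// Keys, defaults, and the sendCommand hook are assumptions for illustration.
fun syncSettingsOnConnect(context: Context, sendCommand: (String) -> Unit) {
    val prefs = context.getSharedPreferences("sensor_settings", Context.MODE_PRIVATE)
    val settings = mapOf(
        "motion_threshold" to prefs.getFloat("motion_threshold", 0.5f),
        "active_rate_hz" to prefs.getFloat("active_rate_hz", 50.0f),
        "idle_rate_hz" to prefs.getFloat("idle_rate_hz", 5.0f),
        "ble_timeout_s" to prefs.getFloat("ble_timeout_s", 30.0f),
    )
    for ((parameter, value) in settings) {
        // Mirrors the `set_<parameter> <value>` command format used by the app.
        sendCommand("set_$parameter $value")
    }
}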

Reconnection Logic and Error Handling

Bluetooth connections can be unreliable, especially with moving sensors. I implemented exponential backoff reconnection:

class BLEReconnection {
    private val reconnectAttempts = mutableMapOf<String, Int>()
    private val handler = Handler(Looper.getMainLooper())

    fun attemptReconnect(deviceAddress: String) {
        // Track attempts per device so the delay actually grows between retries
        val attemptNum = (reconnectAttempts[deviceAddress] ?: 0) + 1
        reconnectAttempts[deviceAddress] = attemptNum

        // Exponential backoff: 1s, 2s, 4s, 8s, 16s, capped at 30s
        val delayMs = minOf(
            BLEConstants.BASE_RECONNECT_DELAY_MS * (1L shl (attemptNum - 1)),
            BLEConstants.MAX_RECONNECT_DELAY_MS
        )

        handler.postDelayed({
            initiateConnection(deviceAddress)
        }, delayMs)
    }
}

The system handles:

  • Automatic reconnection with intelligent backoff

  • Connection state tracking across app lifecycle

  • Graceful degradation when connections fail

  • User notification of connection status

Audio Features: TTS and Notifications

The app includes comprehensive audio feedback systems:

Text-to-Speech Integration

export class TTSService {
  async speak(text: string): Promise<void> {
    if (!this.isInitialized) {
      await this.initialize();
    }
    Tts.stop();
    Tts.speak(text);
  }

  async readAssistantMessage(text: string): Promise<void> {
    if (this.isAutoReadEnabled) {
      await this.speak(text);
    }
  }
}

Smart Notifications

export class NotificationService {
  async handleMessageNotification(isUnknownCommand: boolean = false): Promise<void> {
    if (isUnknownCommand) {
      return; // Don't play sound for unknown commands
    }

    if (this.isSoundNotificationEnabled) {
      await this.playNotificationSound();
    }
  }
}

The audio system supports:

  • Multi-language TTS (German/English)

  • Configurable auto-read for assistant responses

  • Smart notifications that only trigger for recognized commands

  • Headphone detection for appropriate audio routing

Performance Considerations

The architecture prioritizes battery efficiency and reliability:

  1. Foreground services prevent Android from killing critical processes

  2. Wake locks ensure continuous operation when needed (see the sketch after this list)

  3. Data batching reduces network overhead by 80-90%

  4. VAD reduces audio transmission by 40-70%

  5. Exponential backoff prevents connection spam during network issues
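For completeness, here's a minimal sketch of acquiring a partial wake lock on Android; the tag and timeout are illustrative:

import android.content.Context
import android.os.PowerManager

// Sketch: acquire a partial wake lock so the CPU keeps running while the screen is off.
fun acquireStreamingWakeLock(context: Context): PowerManager.WakeLock {
    val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val wakeLock = powerManager.newWakeLock(
        PowerManager.PARTIAL_WAKE_LOCK,
        "sensorapp:streaming" // illustrative tag
    )
    // Always pair acquire() with release(); a timeout acts as a safety net against leaks.
    wakeLock.acquire(10 * 60 * 1000L) // 10 minutes
    return wakeLock
}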

Lessons Learned

The phone-as-gateway approach proved highly effective for this use case. Key insights:

  • Native services are essential for reliable background operation in Android

  • Data batching is crucial for high-frequency sensor data

  • User experience matters - configuration without reflashing is valuable

  • Robust reconnection logic is necessary for mobile BLE applications

The implementation successfully streams motion data from wearable sensors through smartphones to cloud servers, enabling real-time analysis and machine learning applications. The modular architecture allows for easy extension with additional sensor types or processing algorithms.

While the complexity is higher than with direct connectivity approaches, the flexibility and universal compatibility make it worthwhile for applications that require broad accessibility and rich user interaction.