Dataset columns (type and value statistics):

code — string, lengths 66 to 870k
docstring — string, lengths 19 to 26.7k
func_name — string, lengths 1 to 138
language — string, 1 distinct value
repo — string, lengths 7 to 68
path — string, lengths 5 to 324
url — string, lengths 46 to 389
license — string, 7 distinct values
def __GetLineString(this, data):
    """Turns the given list of bytes into a finished string"""
    if (data is None):
        return None
    string = ""
    i = 0
    while (i < len(data)):
        # Read 2 bytes to figure out what to do next
        value = struct.unpack_from("<H", data, i)[0]
        i += 2
        if (value == this.__KEY_TERMINATOR):
            return string
        elif (value == this.__KEY_VARIABLE):
            string += "[VAR]"
        # The unpacked value is an int, so compare against character codes
        elif (value == ord("\n")):
            string += "\n"
        elif (value == ord("\\")):
            string += "\\"
        elif (value == ord("[")):
            string += "\\["
        else:
            string += chr(value)
    return string
Turns the given list of bytes into a finished string
__GetLineString
python
PokeAPI/pokeapi
Resources/scripts/data/gen8/read_swsh.py
https://github.com/PokeAPI/pokeapi/blob/master/Resources/scripts/data/gen8/read_swsh.py
BSD-3-Clause
def __MakeLabelHash(this, f):
    """Returns the label name and a FNV1_64 hash"""
    # Next 8 bytes is the hash of the label name
    hash = struct.unpack("<Q", f.read(8))[0]
    # Next 2 bytes is the label's name length
    nameLength = struct.unpack("<H", f.read(2))[0]
    # Read the bytes until 0x0 is found
    name = this.__ReadUntil(f, 0x0)
    if (this.HashFNV1_64(name) == hash):
        return name, hash
Returns the label name and a FNV1_64 hash
__MakeLabelHash
python
PokeAPI/pokeapi
Resources/scripts/data/gen8/read_swsh.py
https://github.com/PokeAPI/pokeapi/blob/master/Resources/scripts/data/gen8/read_swsh.py
BSD-3-Clause
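The HashFNV1_64 helper called above is not part of this record; as a point of reference, a minimal standalone sketch of a 64-bit FNV-1 hash over a label name might look like the following (standard FNV-1 constants; the repo's actual implementation may differ in encoding or variant):

def fnv1_64(text: str) -> int:
    h = 0xCBF29CE484222325  # FNV-1 64-bit offset basis
    for byte in text.encode("utf-8"):
        h = (h * 0x100000001B3) & 0xFFFFFFFFFFFFFFFF  # multiply by the FNV prime, wrap to 64 bits
        h ^= byte  # FNV-1 xors after the multiply (FNV-1a reverses the order)
    return h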
def __ReadUntil(this, f, value):
    """Reads the given file until it reaches the given value"""
    string = ""
    c = f.read(1)
    end = bytes([value])
    while (c != end):
        # Read one byte at a time to get each character
        string += c.decode("utf-8")
        c = f.read(1)
    return string
Reads the given file until it reaches the given value
__ReadUntil
python
PokeAPI/pokeapi
Resources/scripts/data/gen8/read_swsh.py
https://github.com/PokeAPI/pokeapi/blob/master/Resources/scripts/data/gen8/read_swsh.py
BSD-3-Clause
def call_phone_number(input: str) -> str:
    """calls a phone number as a bot and returns a transcript of the conversation.

    the input to this tool is a pipe separated list of a phone number, a prompt, and the
    first thing the bot should say. The prompt should instruct the bot with what to do on
    the call and be in the 3rd person, like 'the assistant is performing this task' instead
    of 'perform this task'. should only use this tool once it has found an adequate phone
    number to call.

    for example, `+15555555555|the assistant is explaining the meaning of life|i'm going to
    tell you the meaning of life` will call +15555555555, say 'i'm going to tell you the
    meaning of life', and instruct the assistant to tell the human what the meaning of life is.
    """
    phone_number, prompt, initial_message = input.split("|", 2)
    call = OutboundCall(
        base_url=os.environ["TELEPHONY_SERVER_BASE_URL"],
        to_phone=phone_number,
        from_phone=os.environ["OUTBOUND_CALLER_NUMBER"],
        config_manager=RedisConfigManager(),
        agent_config=ChatGPTAgentConfig(
            prompt_preamble=prompt,
            initial_message=BaseMessage(text=initial_message),
        ),
        logger=logging.Logger("call_phone_number"),
    )
    LOOP.run_until_complete(call.start())
    while True:
        maybe_transcript = get_transcript(call.conversation_id)
        if maybe_transcript:
            delete_transcript(call.conversation_id)
            return maybe_transcript
        else:
            time.sleep(1)
calls a phone number as a bot and returns a transcript of the conversation. the input to this tool is a pipe separated list of a phone number, a prompt, and the first thing the bot should say. The prompt should instruct the bot with what to do on the call and be in the 3rd person, like 'the assistant is performing this task' instead of 'perform this task'. should only use this tool once it has found an adequate phone number to call. for example, `+15555555555|the assistant is explaining the meaning of life|i'm going to tell you the meaning of life` will call +15555555555, say 'i'm going to tell you the meaning of life', and instruct the assistant to tell the human what the meaning of life is.
call_phone_number
python
vocodedev/vocode-core
apps/langchain_agent/tools/vocode.py
https://github.com/vocodedev/vocode-core/blob/master/apps/langchain_agent/tools/vocode.py
MIT
async def respond(
    self,
    human_input: str,
    conversation_id: str,
    is_interrupt: bool = False,
) -> Tuple[Optional[str], bool]:
    """Generates a response from the SpellerAgent.

    The response is generated by joining each character in the human input with a space.
    The second element of the tuple indicates whether the agent should stop (False means
    it should not stop).

    Args:
        human_input (str): The input from the human user.
        conversation_id (str): The ID of the conversation.
        is_interrupt (bool): A flag indicating whether the agent was interrupted.

    Returns:
        Tuple[Optional[str], bool]: The generated response and a flag indicating whether to stop.
    """
    return "".join(c + " " for c in human_input), False
Generates a response from the SpellerAgent. The response is generated by joining each character in the human input with a space. The second element of the tuple indicates whether the agent should stop (False means it should not stop). Args: human_input (str): The input from the human user. conversation_id (str): The ID of the conversation. is_interrupt (bool): A flag indicating whether the agent was interrupted. Returns: Tuple[Optional[str], bool]: The generated response and a flag indicating whether to stop.
respond
python
vocodedev/vocode-core
apps/telephony_app/speller_agent.py
https://github.com/vocodedev/vocode-core/blob/master/apps/telephony_app/speller_agent.py
MIT
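The speller's output format follows directly from the join expression; a quick sketch of the transformation (agent construction is omitted, since SpellerAgentConfig's fields are not shown in this record):

# Each input character is echoed followed by a space, and the stop flag
# returned alongside it is always False, so the conversation continues.
response = "".join(c + " " for c in "hello")
assert response == "h e l l o "

Note the trailing space: the join appends a space after the last character as well.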
def create_agent(self, agent_config: AgentConfig) -> BaseAgent:
    """Creates an agent based on the provided agent configuration.

    Args:
        agent_config (AgentConfig): The configuration for the agent to be created.

    Returns:
        BaseAgent: The created agent.

    Raises:
        Exception: If the agent configuration type is not recognized.
    """
    # If the agent configuration type is CHAT_GPT, create a ChatGPTAgent.
    if isinstance(agent_config, ChatGPTAgentConfig):
        return ChatGPTAgent(agent_config=agent_config)
    # If the agent configuration type is agent_speller, create a SpellerAgent.
    elif isinstance(agent_config, SpellerAgentConfig):
        return SpellerAgent(agent_config=agent_config)
    # If the agent configuration type is not recognized, raise an exception.
    raise Exception("Invalid agent config")
Creates an agent based on the provided agent configuration. Args: agent_config (AgentConfig): The configuration for the agent to be created. Returns: BaseAgent: The created agent. Raises: Exception: If the agent configuration type is not recognized.
create_agent
python
vocodedev/vocode-core
apps/telephony_app/speller_agent.py
https://github.com/vocodedev/vocode-core/blob/master/apps/telephony_app/speller_agent.py
MIT
def get_metrics_data(self):
    """Reads and returns current metrics from the SDK"""
    with self._lock:
        self.collect()
        metrics_data = self._metrics_data
        self._metrics_data = None
        return metrics_data
Reads and returns current metrics from the SDK
get_metrics_data
python
vocodedev/vocode-core
playground/streaming/tracing_utils.py
https://github.com/vocodedev/vocode-core/blob/master/playground/streaming/tracing_utils.py
MIT
def default_env_vars() -> dict[str, str]:
    """
    Defines default environment variables for the test session.

    This fixture provides a dictionary of default environment variables that are
    commonly used across tests. It can be overridden in submodule scoped
    `conftest.py` files or directly in tests.

    :return: A dictionary of default environment variables.
    """
    return {
        "ENVIRONMENT": "test",
        "AZURE_OPENAI_API_BASE_EAST_US": "https://api.openai.com",
        "AZURE_OPENAI_API_KEY_EAST_US": "test",
    }
Defines default environment variables for the test session. This fixture provides a dictionary of default environment variables that are commonly used across tests. It can be overridden in submodule scoped `conftest.py` files or directly in tests. :return: A dictionary of default environment variables.
default_env_vars
python
vocodedev/vocode-core
tests/conftest.py
https://github.com/vocodedev/vocode-core/blob/master/tests/conftest.py
MIT
def mock_env(
    monkeypatch: MonkeyPatch,
    request: pytest.FixtureRequest,
    default_env_vars: dict[str, str],
) -> Generator[None, None, None]:
    """
    Temporarily sets environment variables for testing.

    This fixture allows tests to run with a modified set of environment variables,
    either using the default set provided by `default_env_vars` or overridden by
    test-specific parameters. It ensures that changes to environment variables do
    not leak between tests.

    :param monkeypatch: The pytest monkeypatch fixture for modifying environment variables.
    :param request: The pytest FixtureRequest object for accessing test-specific overrides.
    :param default_env_vars: A dictionary of default environment variables.
    :yield: None. This is a setup-teardown fixture that cleans up after itself.
    """
    envvars = default_env_vars.copy()
    if hasattr(request, "param") and isinstance(request.param, dict):
        envvars.update(request.param)
    with mock.patch.dict(os.environ, envvars):
        yield
Temporarily sets environment variables for testing. This fixture allows tests to run with a modified set of environment variables, either using the default set provided by `default_env_vars` or overridden by test-specific parameters. It ensures that changes to environment variables do not leak between tests. :param monkeypatch: The pytest monkeypatch fixture for modifying environment variables. :param request: The pytest FixtureRequest object for accessing test-specific overrides. :param default_env_vars: A dictionary of default environment variables. :yield: None. This is a setup-teardown fixture that cleans up after itself.
mock_env
python
vocodedev/vocode-core
tests/conftest.py
https://github.com/vocodedev/vocode-core/blob/master/tests/conftest.py
MIT
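Because mock_env reads request.param, individual tests can override variables via indirect parametrization; a hypothetical test sketching that usage (the test name and the ENVIRONMENT value are illustrative, and this assumes mock_env is registered as a pytest fixture in the source):

import os

import pytest


@pytest.mark.parametrize(
    "mock_env",
    [{"ENVIRONMENT": "staging"}],  # merged over default_env_vars for this test only
    indirect=True,
)
def test_reads_override(mock_env: None) -> None:
    assert os.environ["ENVIRONMENT"] == "staging"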
def default_env_vars(default_env_vars: dict[str, str]) -> dict[str, str]:
    """
    Extends the `default_env_vars` fixture specifically for the submodule.

    This fixture takes the session-scoped `default_env_vars` fixture from the parent
    conftest.py and extends or overrides it with additional or modified environment
    variables specific to the submodule.

    :param default_env_vars: The inherited `default_env_vars` fixture from the parent conftest.
    :return: A modified dictionary of default environment variables for the submodule.
    """
    submodule_env_vars = default_env_vars.copy()
    submodule_env_vars.update(
        {
            "VONAGE_API_KEY": "test",
            "VONAGE_API_SECRET": "test",
            "VONAGE_APPLICATION_ID": "test",
            "VONAGE_PRIVATE_KEY": """-----BEGIN PRIVATE KEY-----
fake_key
-----END PRIVATE KEY-----""",
            "BASE_URL": "test",
            "CALL_SERVER_BASE_URL": "test2",
        }
    )
    return submodule_env_vars
Extends the `default_env_vars` fixture specifically for the submodule. This fixture takes the session-scoped `default_env_vars` fixture from the parent conftest.py and extends or overrides it with additional or modified environment variables specific to the submodule. :param default_env_vars: The inherited `default_env_vars` fixture from the parent conftest. :return: A modified dictionary of default environment variables for the submodule.
default_env_vars
python
vocodedev/vocode-core
tests/streaming/action/conftest.py
https://github.com/vocodedev/vocode-core/blob/master/tests/streaming/action/conftest.py
MIT
def action_config() -> dict:
    """Provides a common action configuration for tests."""
    return {
        "processing_mode": "muted",
        "name": "name",
        "description": "A description",
        "url": "https://example.com",
        "input_schema": json.dumps(ACTION_INPUT_SCHEMA),
        "speak_on_send": True,
        "speak_on_receive": True,
        "signature_secret": base64.b64encode(os.urandom(32)).decode(),
    }
Provides a common action configuration for tests.
action_config
python
vocodedev/vocode-core
tests/streaming/action/test_external_actions.py
https://github.com/vocodedev/vocode-core/blob/master/tests/streaming/action/test_external_actions.py
MIT
def execute_action_setup(mocker, action_config) -> ExecuteExternalAction:
    """Common setup for creating an ExecuteExternalAction instance."""
    action = ExecuteExternalAction(
        action_config=ExecuteExternalActionVocodeActionConfig(**action_config),
    )
    mocked_requester = mocker.AsyncMock()
    mocked_requester.send_request.return_value = ExternalActionResponse(
        result={"test": "test"},
        agent_message="message!",
        success=True,
    )
    action.external_actions_requester = mocked_requester
    return action
Common setup for creating an ExecuteExternalAction instance.
execute_action_setup
python
vocodedev/vocode-core
tests/streaming/action/test_external_actions.py
https://github.com/vocodedev/vocode-core/blob/master/tests/streaming/action/test_external_actions.py
MIT
def default_env_vars(default_env_vars: dict[str, str]) -> dict[str, str]:
    """
    Extends the `default_env_vars` fixture specifically for the submodule.

    This fixture takes the session-scoped `default_env_vars` fixture from the parent
    conftest.py and extends or overrides it with additional or modified environment
    variables specific to the submodule.

    :param default_env_vars: The inherited `default_env_vars` fixture from the parent conftest.
    :return: A modified dictionary of default environment variables for the submodule.
    """
    submodule_env_vars = default_env_vars.copy()
    submodule_env_vars.update(
        {
            "VOCODE_PLAYHT_ON_PREM_ADDR": "test",
            "BASE_URL": "test",
            "CALL_SERVER_BASE_URL": "test2",
        }
    )
    return submodule_env_vars
Extends the `default_env_vars` fixture specifically for the submodule. This fixture takes the session-scoped `default_env_vars` fixture from the parent conftest.py and extends or overrides it with additional or modified environment variables specific to the submodule. :param default_env_vars: The inherited `default_env_vars` fixture from the parent conftest. :return: A modified dictionary of default environment variables for the submodule.
default_env_vars
python
vocodedev/vocode-core
tests/streaming/synthesizer/conftest.py
https://github.com/vocodedev/vocode-core/blob/master/tests/streaming/synthesizer/conftest.py
MIT
def _patched_serialize_record(text: str, record: dict) -> str:
    """
    This function takes a text string and a record dictionary as input and returns a
    serialized string representation of the record.

    The record dictionary is expected to contain various keys related to logging
    information such as 'level', 'time', 'elapsed', 'exception', 'extra', 'file',
    'function', 'line', 'message', 'module', 'name', 'process', 'thread'. Each key's
    value is processed and added to a new dictionary 'serializable'.

    If the 'exception' key in the record is not None, it is further processed to
    extract 'type', 'value', and 'traceback' information.

    The 'serializable' dictionary is then converted to a JSON string using json.dumps.
    The 'default' parameter is set to str to convert any non-serializable types to
    string. The 'ensure_ascii' parameter is set to False so that the function can
    output non-ASCII characters as they are. The function finally returns the
    serialized string with a newline character appended at the end.

    Args:
        text (str): A text string.
        record (dict): A dictionary containing logging information.

    Returns:
        str: A serialized string representation of the record dictionary.
    """
    exception = record["exception"]
    if exception is not None:
        exception = {
            "type": None if exception.type is None else exception.type.__name__,
            "value": exception.value,
            "traceback": bool(exception.traceback),
        }
    serializable = {
        "severity": record["level"].name,
        "text": text,
        "timestamp": record["time"].timestamp(),
        "elapsed": {
            "repr": record["elapsed"],
            "seconds": record["elapsed"].total_seconds(),
        },
        "exception": exception,
        "ctx": get_serialized_ctx_wrappers(),
        "extra": record["extra"],
        "file": {"name": record["file"].name, "path": record["file"].path},
        "function": record["function"],
        "level": {
            "icon": record["level"].icon,
            "name": record["level"].name,
            "no": record["level"].no,
        },
        "line": record["line"],
        "message": record["message"],
        "module": record["module"],
        "name": record["name"],
        "process": {"id": record["process"].id, "name": record["process"].name},
        "thread": {"id": record["thread"].id, "name": record["thread"].name},
        "time": {"repr": record["time"], "timestamp": record["time"].timestamp()},
    }
    return json.dumps(serializable, default=str, ensure_ascii=False) + "\n"
This function takes a text string and a record dictionary as input and returns a serialized string representation of the record. The record dictionary is expected to contain various keys related to logging information such as 'level', 'time', 'elapsed', 'exception', 'extra', 'file', 'function', 'line', 'message', 'module', 'name', 'process', 'thread'. Each key's value is processed and added to a new dictionary 'serializable'. If the 'exception' key in the record is not None, it is further processed to extract 'type', 'value', and 'traceback' information. The 'serializable' dictionary is then converted to a JSON string using json.dumps. The 'default' parameter is set to str to convert any non-serializable types to string. The 'ensure_ascii' parameter is set to False so that the function can output non-ASCII characters as they are. The function finally returns the serialized string with a newline character appended at the end. Args: text (str): A text string. record (dict): A dictionary containing logging information. Returns: str: A serialized string representation of the record dictionary.
_patched_serialize_record
python
vocodedev/vocode-core
vocode/logging.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/logging.py
MIT
def emit(self, record: logging.LogRecord) -> None:  # pragma: no cover
    """
    Propagates logs to loguru.

    :param record: record to log.
    """
    try:
        level: str | int = logger.level(record.levelname).name
    except ValueError:
        level = record.levelno

    # Find caller from where originated the logged message
    frame, depth = logging.currentframe(), 2
    while (
        frame.f_code.co_filename == logging.__file__
        or frame.f_code.co_filename == __file__
        or "sentry_sdk/integrations" in frame.f_code.co_filename
    ):
        frame = frame.f_back  # type: ignore
        depth += 1

    logger.opt(depth=depth, exception=record.exc_info).log(
        level,
        record.getMessage(),
    )
Propagates logs to loguru. :param record: record to log.
emit
python
vocodedev/vocode-core
vocode/logging.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/logging.py
MIT
def configure_intercepter() -> None:
    """
    Configures the logging system to intercept log messages.

    This function sets up an InterceptHandler instance as the main handler for the
    root logger. It sets the logging level to INFO, meaning that all messages with
    severity INFO and above will be handled.

    It then iterates over all the loggers in the logging system. If a logger's name
    starts with "uvicorn.", it removes all handlers from that logger. This is done
    to prevent uvicorn's default logging configuration from interfering with our
    custom configuration.

    Finally, it sets the InterceptHandler instance as the sole handler for the
    "uvicorn" and "uvicorn.access" loggers. This ensures that all log messages from
    uvicorn and its access logger are intercepted by our custom handler.
    """
    intercept_handler = InterceptHandler()
    logging.basicConfig(handlers=[intercept_handler], level=logging.INFO)

    for logger_name in logging.root.manager.loggerDict:
        if logger_name.startswith("uvicorn."):
            logging.getLogger(logger_name).handlers = []

    logging.getLogger("uvicorn").handlers = [intercept_handler]
    logging.getLogger("uvicorn.access").handlers = [intercept_handler]
Configures the logging system to intercept log messages. This function sets up an InterceptHandler instance as the main handler for the root logger. It sets the logging level to INFO, meaning that all messages with severity INFO and above will be handled. It then iterates over all the loggers in the logging system. If a logger's name starts with "uvicorn.", it removes all handlers from that logger. This is done to prevent uvicorn's default logging configuration from interfering with our custom configuration. Finally, it sets the InterceptHandler instance as the sole handler for the "uvicorn" and "uvicorn.access" loggers. This ensures that all log messages from uvicorn and its access logger are intercepted by our custom handler.
configure_intercepter
python
vocodedev/vocode-core
vocode/logging.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/logging.py
MIT
def configure_pretty_logging() -> None:
    """
    Configures the logging system to output pretty logs.

    This function enables the 'vocode' logger, sets up an intercept handler to
    capture logs from the standard logging module, removes all existing handlers
    from the 'loguru' logger, and adds a new handler that outputs to stdout with
    pretty formatting (colored, not serialized, no backtrace or diagnosis
    information).
    """
    logger.enable("vocode")
    configure_intercepter()
    logger.remove()
    logger.add(
        sys.stdout,
        level=logging.DEBUG,
        backtrace=False,
        diagnose=False,
        serialize=False,
        colorize=True,
    )
Configures the logging system to output pretty logs. This function enables the 'vocode' logger, sets up an intercept handler to capture logs from the standard logging module, removes all existing handlers from the 'loguru' logger, and adds a new handler that outputs to stdout with pretty formatting (colored, not serialized, no backtrace or diagnosis information).
configure_pretty_logging
python
vocodedev/vocode-core
vocode/logging.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/logging.py
MIT
def configure_json_logging() -> None:
    """
    Configures the logging system to output logs in JSON format.

    This function enables the 'vocode' logger, sets up an intercept handler to
    capture logs from the standard logging module, removes all existing handlers
    from the 'loguru' logger, and adds a new handler that outputs to stdout with
    JSON formatting (serialized, no backtrace or diagnosis information).
    """
    logger.enable("vocode")
    configure_intercepter()
    logger.remove()
    logger.add(
        sys.stdout,
        format="{message}",
        level=logging.DEBUG,
        backtrace=False,
        diagnose=False,
        serialize=True,
    )
Configures the logging system to output logs in JSON format. This function enables the 'vocode' logger, sets up an intercept handler to capture logs from the standard logging module, removes all existing handlers from the 'loguru' logger, and adds a new handler that outputs to stdout with JSON formatting (serialized, no backtrace or diagnosis information).
configure_json_logging
python
vocodedev/vocode-core
vocode/logging.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/logging.py
MIT
async def check_for_idle(self):
    """Asks if human is still on the line if no activity is detected, and terminates the conversation if not."""
    await self.initial_message_tracker.wait()
    check_human_present_count = 0
    check_human_present_threshold = self.agent.get_agent_config().num_check_human_present_times
    while self.is_active():
        if check_human_present_count > 0 and self.is_human_still_there == True:
            # Reset the counter if the human is still there
            check_human_present_count = 0
        if (
            not self.check_for_idle_paused
        ) and time.time() - self.last_action_timestamp > self.idle_time_threshold:
            if check_human_present_count >= check_human_present_threshold:
                # Stop the phone call after some retries to prevent infinitely long call where human is just silent.
                await self.action_on_idle()
            self.is_human_still_there = False
            await self.send_single_message(
                message=BaseMessage(text=random.choice(CHECK_HUMAN_PRESENT_MESSAGE_CHOICES)),
            )
            check_human_present_count += 1
        # wait till the idle time would have passed the threshold if no action occurs
        await asyncio.sleep(self.idle_time_threshold / 2)
Asks if human is still on the line if no activity is detected, and terminates the conversation if not.
check_for_idle
python
vocodedev/vocode-core
vocode/streaming/streaming_conversation.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/streaming_conversation.py
MIT
async def broadcast_interrupt(self):
    """Stops all inflight events and cancels all workers that are sending output.

    Returns true if any events were interrupted - which is used as a flag for
    the agent (is_interrupt)
    """
    async with self.interrupt_lock:
        num_interrupts = 0
        while True:
            try:
                interruptible_event = self.interruptible_events.get_nowait()
                if not interruptible_event.is_interrupted():
                    if interruptible_event.interrupt():
                        logger.debug(
                            f"Interrupting event {type(interruptible_event.payload)} {interruptible_event.payload}",
                        )
                        num_interrupts += 1
            except queue.Empty:
                break
        self.output_device.interrupt()
        self.agent.cancel_current_task()
        self.agent_responses_worker.cancel_current_task()
        if self.actions_worker:
            self.actions_worker.cancel_current_task()
        return num_interrupts > 0
Stops all inflight events and cancels all workers that are sending output Returns true if any events were interrupted - which is used as a flag for the agent (is_interrupt)
broadcast_interrupt
python
vocodedev/vocode-core
vocode/streaming/streaming_conversation.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/streaming_conversation.py
MIT
async def send_speech_to_output(
    self,
    message: str,
    synthesis_result: SynthesisResult,
    stop_event: threading.Event,
    seconds_per_chunk: float,
    transcript_message: Optional[Message] = None,
    started_event: Optional[threading.Event] = None,
):
    """
    - Sends the speech chunk by chunk to the output device
    - update the transcript message as chunks come in (transcript_message is
      always provided for non filler audio utterances)
    - If the stop_event is set, the output is stopped
    - Sets started_event when the first chunk is sent

    Returns the message that was sent up to, and a flag if the message was cut off
    """
    seconds_spoken = 0.0

    def create_on_play_callback(
        chunk_idx: int,
        processed_event: asyncio.Event,
    ):
        def _on_play():
            if chunk_idx == 0:
                if started_event:
                    started_event.set()
                if first_chunk_span:
                    self._track_first_chunk(first_chunk_span, synthesis_result)
            nonlocal seconds_spoken
            self.mark_last_action_timestamp()
            seconds_spoken += seconds_per_chunk
            if transcript_message:
                transcript_message.text = synthesis_result.get_message_up_to(seconds_spoken)
            processed_event.set()

        return _on_play

    def create_on_interrupt_callback(
        processed_event: asyncio.Event,
    ):
        def _on_interrupt():
            processed_event.set()

        return _on_interrupt

    if self.transcriber.get_transcriber_config().mute_during_speech:
        logger.debug("Muting transcriber")
        self.transcriber.mute()
    logger.debug(f"Start sending speech {message} to output")

    first_chunk_span = self._maybe_create_first_chunk_span(synthesis_result, message)
    audio_chunks: List[AudioChunk] = []
    processed_events: List[asyncio.Event] = []
    interrupted_before_all_chunks_sent = False
    async for chunk_idx, chunk_result in enumerate_async_iter(synthesis_result.chunk_generator):
        if stop_event.is_set():
            logger.debug("Interrupted before all chunks were sent")
            interrupted_before_all_chunks_sent = True
            break
        processed_event = asyncio.Event()
        audio_chunk = AudioChunk(
            data=chunk_result.chunk,
        )
        # register callbacks
        setattr(audio_chunk, "on_play", create_on_play_callback(chunk_idx, processed_event))
        setattr(
            audio_chunk,
            "on_interrupt",
            create_on_interrupt_callback(processed_event),
        )
        # Prevents the case where we send a chunk after the output device has been interrupted
        async with self.interrupt_lock:
            self.output_device.consume_nonblocking(
                InterruptibleEvent(
                    payload=audio_chunk,
                    is_interruptible=True,
                    interruption_event=stop_event,
                ),
            )
        audio_chunks.append(audio_chunk)
        processed_events.append(processed_event)

    logger.debug("Finished sending chunks to the output device")

    if processed_events:
        await processed_events[-1].wait()

    maybe_first_interrupted_audio_chunk = next(
        (
            audio_chunk
            for audio_chunk in audio_chunks
            if audio_chunk.state == ChunkState.INTERRUPTED
        ),
        None,
    )
    cut_off = (
        interrupted_before_all_chunks_sent or maybe_first_interrupted_audio_chunk is not None
    )
    if (
        transcript_message and not cut_off
    ):  # if the audio was not cut off, we can set the transcript message to the full message
        transcript_message.text = synthesis_result.get_message_up_to(None)
    if self.transcriber.get_transcriber_config().mute_during_speech:
        logger.debug("Unmuting transcriber")
        self.transcriber.unmute()
    if transcript_message:
        transcript_message.is_final = not cut_off
    message_sent = transcript_message.text if transcript_message and cut_off else message
    if synthesis_result.synthesis_total_span:
        synthesis_result.synthesis_total_span.finish()
    return message_sent, cut_off
- Sends the speech chunk by chunk to the output device - update the transcript message as chunks come in (transcript_message is always provided for non filler audio utterances) - If the stop_event is set, the output is stopped - Sets started_event when the first chunk is sent Returns the message that was sent up to, and a flag if the message was cut off
send_speech_to_output
python
vocodedev/vocode-core
vocode/streaming/streaming_conversation.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/streaming_conversation.py
MIT
async def _end_of_run_hook(self) -> None:
    """This method is called at the end of the run method.

    It is optional but intended to be overridden if needed.
    """
    pass
This method is called at the end of the run method. It is optional but intended to be overridden if needed.
_end_of_run_hook
python
vocodedev/vocode-core
vocode/streaming/action/end_conversation.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/action/end_conversation.py
MIT
def merge_event_logs(event_logs: List[EventLog]) -> List[EventLog]:
    """Returns a new list of event logs where consecutive bot messages are merged."""
    new_event_logs: List[EventLog] = []
    idx = 0
    while idx < len(event_logs):
        bot_messages_buffer: List[Message] = []
        current_log = event_logs[idx]
        while isinstance(current_log, Message) and current_log.sender == Sender.BOT:
            bot_messages_buffer.append(current_log)
            idx += 1
            try:
                current_log = event_logs[idx]
            except IndexError:
                break
        if bot_messages_buffer:
            merged_bot_message = deepcopy(bot_messages_buffer[-1])
            merged_bot_message.text = " ".join(event_log.text for event_log in bot_messages_buffer)
            new_event_logs.append(merged_bot_message)
        else:
            new_event_logs.append(current_log)
            idx += 1
    return new_event_logs
Returns a new list of event logs where consecutive bot messages are merged.
merge_event_logs
python
vocodedev/vocode-core
vocode/streaming/agent/openai_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/agent/openai_utils.py
MIT
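A sketch of the merge behavior (Message and Sender construction is simplified to the two fields the function actually touches, so the exact constructors may differ from the repo's models):

logs = [
    Message(sender=Sender.BOT, text="Hi,"),
    Message(sender=Sender.BOT, text="how are"),
    Message(sender=Sender.BOT, text="you?"),
    Message(sender=Sender.HUMAN, text="Fine."),
]

# The three consecutive BOT messages collapse into one merged message;
# the HUMAN message passes through unchanged.
merged = merge_event_logs(logs)
assert [log.text for log in merged] == ["Hi, how are you?", "Fine."]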
def split_sentences(text: str) -> List[str]:
    """Splits text into sentences and preserve trailing periods.

    Merge sentences that are just numbers, as they are part of lists.
    """
    initial_split = text.split(". ")
    final_split = []
    buffer = ""
    for i, sentence in enumerate(initial_split):
        is_last = i == len(initial_split) - 1
        buffer += sentence
        if not is_last:
            buffer += ". "
        if not re.fullmatch(r"\d+", sentence.strip()):
            final_split.append(buffer.strip())
            buffer = ""
    if buffer.strip():
        final_split.append(buffer.strip())
    return [sentence for sentence in final_split if sentence]
Splits text into sentences and preserve trailing periods. Merge sentences that are just numbers, as they are part of lists.
split_sentences
python
vocodedev/vocode-core
vocode/streaming/agent/streaming_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/agent/streaming_utils.py
MIT
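Tracing the function on a numbered list shows the merging rule in action: a segment that is only digits is held in the buffer and attached to the sentence that follows it, so list markers like "1." stay glued to their items.

assert split_sentences("Here are two options. 1. Coffee. 2. Tea.") == [
    "Here are two options.",
    "1. Coffee.",
    "2. Tea.",
]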
def num_tokens_from_messages(messages: List[dict], model: str = "gpt-3.5-turbo-0613"):
    """Return the number of tokens used by a list of messages."""
    tokenizer_info = get_tokenizer_info(model)
    if tokenizer_info is None:
        raise NotImplementedError(
            f"""num_tokens_from_messages() is not implemented for model {model}.
See https://github.com/openai/openai-python/blob/main/chatml.md for information
on how messages are converted to tokens."""
        )
    num_tokens = 0
    for message in messages:
        num_tokens += tokenizer_info.tokens_per_message
        num_tokens += tokens_from_dict(
            encoding=tokenizer_info.encoding,
            d=message,
            tokens_per_name=tokenizer_info.tokens_per_name,
        )
    num_tokens += 3  # every reply is primed with <|start|>assistant<|message|>
    return num_tokens
Return the number of tokens used by a list of messages.
num_tokens_from_messages
python
vocodedev/vocode-core
vocode/streaming/agent/token_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/agent/token_utils.py
MIT
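A hypothetical call, assuming the surrounding module's get_tokenizer_info recognizes the model; the exact count depends on the tokenizer, so the result here is illustrative rather than fixed:

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Sums per-message overhead plus content tokens, then adds the
# 3-token reply priming described in the comment above.
total = num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")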
def tokens_from_dict(encoding: tiktoken.Encoding, d: Dict[str, Any], tokens_per_name: int) -> int:
    """Return the number of OpenAI tokens in a dict."""
    num_tokens: int = 0
    for key, value in d.items():
        if value is None:
            continue
        if isinstance(value, str):
            num_tokens += len(encoding.encode(value))
            if key == "name":
                num_tokens += tokens_per_name
        elif isinstance(value, dict):
            num_tokens += tokens_from_dict(
                encoding=encoding, d=value, tokens_per_name=tokens_per_name
            )
    return num_tokens
Return the number of OpenAI tokens in a dict.
tokens_from_dict
python
vocodedev/vocode-core
vocode/streaming/agent/token_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/agent/token_utils.py
MIT
def num_tokens_from_functions(functions: List[dict] | None, model="gpt-3.5-turbo-0613") -> int:
    """Return the number of tokens used by a list of functions."""
    if not functions:
        return 0
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        logger.warning("Warning: model not found. Using cl100k_base encoding.")
        encoding = tiktoken.get_encoding("cl100k_base")

    function_overhead = 3 + len(encoding.encode(_FUNCTION_OVERHEAD_STR))

    return function_overhead + sum(
        len(encoding.encode(_format_func_into_prompt_str(func=f))) for f in functions
    )
Return the number of tokens used by a list of functions.
num_tokens_from_functions
python
vocodedev/vocode-core
vocode/streaming/agent/token_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/agent/token_utils.py
MIT
async def initialize_source(self, room: rtc.Room):
    """Creates the AudioSource that will be used to capture audio frames.

    Can only be called once the room has set up its track callbacks.
    """
    self.room = room
    source = rtc.AudioSource(self.sampling_rate, NUM_CHANNELS)
    track = rtc.LocalAudioTrack.create_audio_track("agent-synthesis", source)
    options = rtc.TrackPublishOptions()
    options.source = rtc.TrackSource.SOURCE_MICROPHONE
    await self.room.local_participant.publish_track(track, options)
    self.track = track
    self.source = source
Creates the AudioSource that will be used to capture audio frames. Can only be called once the room has set up its track callbacks
initialize_source
python
vocodedev/vocode-core
vocode/streaming/output_device/livekit_output_device.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/output_device/livekit_output_device.py
MIT
async def play(self, chunk: bytes):
    """Sends an audio chunk to immediate playback"""
    pass
Sends an audio chunk to immediate playback
play
python
vocodedev/vocode-core
vocode/streaming/output_device/rate_limit_interruptions_output_device.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/output_device/rate_limit_interruptions_output_device.py
MIT
async def listen() -> None:
    """Listen to the websocket for audio data and stream it."""
    first_message = True
    buffer = bytearray()
    while True:
        message = await ws.recv()
        if "audio" not in message:
            continue
        response = ElevenLabsWebsocketResponse.model_validate_json(message)
        if response.audio:
            decoded = base64.b64decode(response.audio)
            seconds = len(decoded) / (
                self.sample_width * self.synthesizer_config.sampling_rate
            )
            if self.upsample:
                decoded = self._resample_chunk(
                    decoded,
                    self.sample_rate,
                    self.upsample,
                )
                seconds = len(decoded) / (self.sample_width * self.sample_rate)
            if response.alignment:
                utterance_chunk = "".join(response.alignment.chars) + " "
                self.current_turn_utterances_by_chunk.append((utterance_chunk, seconds))
            # For backchannels, send them all as one chunk (so it can't be interrupted) and reduce the volume
            # so that in the case of a false endpoint, the backchannel is not too loud.
            if first_message and backchannelled:
                buffer.extend(decoded)
                logger.info("First message was a backchannel, reducing volume.")
                reduced_amplitude_buffer = self.reduce_chunk_amplitude(
                    buffer, factor=self.synthesizer_config.backchannel_amplitude_factor
                )
                await self.voice_packet_queue.put(reduced_amplitude_buffer)
                buffer = bytearray()
                first_message = False
            else:
                buffer.extend(decoded)
                for chunk_idx in range(0, len(buffer) - chunk_size, chunk_size):
                    await self.voice_packet_queue.put(
                        buffer[chunk_idx : chunk_idx + chunk_size]
                    )
                buffer = buffer[len(buffer) - (len(buffer) % chunk_size) :]
        if response.isFinal:
            await self.voice_packet_queue.put(None)
            break
Listen to the websocket for audio data and stream it.
listen
python
vocodedev/vocode-core
vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
MIT
async def create_speech_uncached(
    self,
    message: BaseMessage,
    chunk_size: int,
    is_first_text_chunk: bool = False,
    is_sole_text_chunk: bool = False,
):
    """
    Ran when doing utterance parsing. ie: "Hello, my name is foo."
    """
    if not self.websocket_listener:
        self.websocket_listener = asyncio.create_task(
            self.establish_websocket_listeners(chunk_size)
        )
    if isinstance(message, BotBackchannel):
        if not message.text.endswith(" "):
            message.text += " "
        await self.text_chunk_queue.put(message)
        self.total_chars += len(message.text)
    else:
        async for text in string_chunker(message.text):
            await self.text_chunk_queue.put(LLMToken(text=text))
            self.total_chars += len(text)
    return self.get_current_utterance_synthesis_result()
Ran when doing utterance parsing. ie: "Hello, my name is foo."
create_speech_uncached
python
vocodedev/vocode-core
vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
MIT
async def send_token_to_synthesizer(self, message: LLMToken, chunk_size: int):
    """
    Ran when parsing a single chunk of text. ie: "Hello,"
    """
    self.total_chars += len(message.text)
    if not self.websocket_listener:
        self.websocket_listener = asyncio.create_task(
            self.establish_websocket_listeners(chunk_size)
        )
    await self.text_chunk_queue.put(message)
    return None
Ran when parsing a single chunk of text. ie: "Hello,"
send_token_to_synthesizer
python
vocodedev/vocode-core
vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/synthesizer/eleven_labs_websocket_synthesizer.py
MIT
async def generate_chunks(
    play_ht_chunk: bytes,
    cut_leading_silence=False,
) -> AsyncGenerator[bytes, None]:
    """Yields chunks of size chunk_size from play_ht_chunk and leaves the remainder in buffer.

    If cut_leading_silence is True, does not yield chunks until it detects voice.
    """
    nonlocal buffer
    buffer.extend(play_ht_chunk)
    detected_voice = False
    for buffer_idx, chunk in self._enumerate_by_chunk_size(buffer, chunk_size):
        if cut_leading_silence and not detected_voice:
            if self._contains_voice_experimental(chunk):
                detected_voice = True
                yield chunk
            if detected_voice:
                logger.debug(f"Cut off {buffer_idx} bytes of leading silence")
        else:
            yield chunk
    buffer = buffer[len(buffer) - (len(buffer) % chunk_size) :]
Yields chunks of size chunk_size from play_ht_chunk and leaves the remainder in buffer. If cut_leading_silence is True, does not yield chunks until it detects voice.
generate_chunks
python
vocodedev/vocode-core
vocode/streaming/synthesizer/play_ht_synthesizer_v2.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/synthesizer/play_ht_synthesizer_v2.py
MIT
async def _cut_out_trailing_silence(
    trailing_chunk: bytes,
) -> AsyncGenerator[bytes, None]:
    """Yields chunks of size chunk_size from trailing_chunk until it detects silence."""
    for buffer_idx, chunk in self._enumerate_by_chunk_size(trailing_chunk, chunk_size):
        if not self._contains_voice_experimental(chunk):
            logger.debug(
                f"Cutting off {len(trailing_chunk) - buffer_idx} bytes of trailing silence",
            )
            break
        yield chunk
Yields chunks of size chunk_size from trailing_chunk until it detects silence.
_cut_out_trailing_silence
python
vocodedev/vocode-core
vocode/streaming/synthesizer/play_ht_synthesizer_v2.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/synthesizer/play_ht_synthesizer_v2.py
MIT
def __init__(
    self,
    prefix: Optional[str] = None,
    suffix: Optional[str] = None,
) -> None:
    """
    Initialize a RedisGenericMessageQueue instance.

    This initializes a Redis client and sets the name of the stream.
    """
    self.redis: Redis = initialize_redis()
    queue_name_prefix = f"{prefix}_" if prefix else ""
    queue_name_suffix = f"_{suffix}" if suffix else ""
    self.queue_name = f"{queue_name_prefix}queue{queue_name_suffix}"
Initialize a RedisGenericMessageQueue instance. This initializes a Redis client and sets the name of the stream.
__init__
python
vocodedev/vocode-core
vocode/streaming/utils/redis.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/redis.py
MIT
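The resulting stream names follow directly from the constructor's f-string, for illustration:

RedisGenericMessageQueue().queue_name                           # "queue"
RedisGenericMessageQueue(prefix="events").queue_name            # "events_queue"
RedisGenericMessageQueue(prefix="a", suffix="b").queue_name     # "a_queue_b"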
async def publish(self, message: dict) -> None:
    """
    Publishes a message to the Redis stream.

    Args:
        message (dict): The message to be published.

    Returns:
        None
    """
    logger.info(f"[{self.queue_name}] Publishing message: {message}")
    try:
        await self.redis.xadd(self.queue_name, message)
    except Exception as e:
        logger.exception(f"[{self.queue_name}] Failed to publish message: {message}")
        raise e
Publishes a message to the Redis stream. Args: message (dict): The message to be published. Returns: None
publish
python
vocodedev/vocode-core
vocode/streaming/utils/redis.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/redis.py
MIT
async def process(self, item):
    """
    Publish results onto output queue.

    Calls to async function / task should be able to handle asyncio.CancelledError
    gracefully and not re-raise it
    """
    raise NotImplementedError
Publish results onto output queue. Calls to async function / task should be able to handle asyncio.CancelledError gracefully and not re-raise it
process
python
vocodedev/vocode-core
vocode/streaming/utils/worker.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/worker.py
MIT
def interrupt(self) -> bool:
    """
    Returns True if the event was interruptible and is now interrupted.
    """
    if not self.is_interruptible:
        return False
    self.interruption_event.set()
    return True
Returns True if the event was interruptible and is now interrupted.
interrupt
python
vocodedev/vocode-core
vocode/streaming/utils/worker.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/worker.py
MIT
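A sketch of the flag-flipping contract, using the same constructor keywords that appear in send_speech_to_output above (payload, is_interruptible, interruption_event); any other constructor defaults are assumptions:

import threading

stop = threading.Event()
event = InterruptibleEvent(payload=b"chunk", is_interruptible=True, interruption_event=stop)

# An interruptible event sets its underlying threading.Event and reports True;
# a non-interruptible one refuses and leaves the flag untouched.
assert event.interrupt() is True
assert stop.is_set()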
async def process(self, item: InterruptibleEventType):
    """
    Publish results onto output queue.

    Calls to async function / task should be able to handle asyncio.CancelledError
    gracefully.
    """
    raise NotImplementedError
Publish results onto output queue. Calls to async function / task should be able to handle asyncio.CancelledError gracefully:
process
python
vocodedev/vocode-core
vocode/streaming/utils/worker.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/worker.py
MIT
def cancel_current_task(self):
    """Free up the resources. That's useful so implementors do not have to implement this but:
    - threads tasks won't be able to be interrupted. Hopefully not too much of a big deal;
      threads will also get a reference to the interruptible event
    - asyncio tasks will still have to handle CancelledError and clean up resources
    """
    if (
        self.current_task
        and not self.current_task.done()
        and self.interruptible_event.is_interruptible
    ):
        return self.current_task.cancel()
    return False
Free up the resources. That's useful so implementors do not have to implement this but: - threads tasks won't be able to be interrupted. Hopefully not too much of a big deal Threads will also get a reference to the interruptible event - asyncio tasks will still have to handle CancelledError and clean up resources
cancel_current_task
python
vocodedev/vocode-core
vocode/streaming/utils/worker.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/worker.py
MIT
async def generate_from_async_iter_with_lookahead(
    async_iter: AsyncIterator[AsyncIteratorGenericType],
    lookahead: int,
) -> AsyncGenerator[List[AsyncIteratorGenericType], None]:
    """Yield sliding window lists of length `lookahead + 1` from an async iterator.

    If the length of async iterator < lookahead + 1, then it should just yield
    the whole async iterator as a list.
    """
    assert lookahead > 0
    buffer = []
    stream_length = 0
    while True:
        try:
            next_item = await async_iter.__anext__()
            stream_length += 1
            buffer.append(next_item)
            if len(buffer) == lookahead + 1:
                yield buffer
                buffer = buffer[1:]
        except StopAsyncIteration:
            if buffer and stream_length <= lookahead:
                yield buffer
            return
Yield sliding window lists of length `lookahead + 1` from an async iterator. If the length of async iterator < lookahead + 1, then it should just yield the whole async iterator as a list.
generate_from_async_iter_with_lookahead
python
vocodedev/vocode-core
vocode/streaming/utils/__init__.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/utils/__init__.py
MIT
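A runnable sketch of the windowing behavior (the helper async generator is illustrative):

import asyncio

async def numbers():
    for n in [1, 2, 3, 4]:
        yield n

async def main():
    # lookahead=2 -> windows of length 3: [1, 2, 3] then [2, 3, 4]
    async for window in generate_from_async_iter_with_lookahead(numbers(), 2):
        print(window)

asyncio.run(main())

A stream shorter than lookahead + 1 (say, two items with lookahead=2) is yielded once, whole, via the StopAsyncIteration branch.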
async def add_texts(
    self,
    texts: Iterable[str],
    metadatas: Optional[List[dict]] = None,
    ids: Optional[List[str]] = None,
    namespace: Optional[str] = None,
) -> List[str]:
    """Run more texts through the embeddings and add to the vectorstore.

    Args:
        texts: Iterable of strings to add to the vectorstore.
        metadatas: Optional list of metadatas associated with the texts.
        ids: Optional list of ids to associate with the texts.
        namespace: Optional pinecone namespace to add the texts to.

    Returns:
        List of ids from adding the texts into the vectorstore.
    """
    # Adapted from: langchain/vectorstores/pinecone.py. Made langchain implementation async.
    if namespace is None:
        namespace = ""
    # Embed and create the documents
    docs = []
    ids = ids or [str(uuid.uuid4()) for _ in texts]
    for i, text in enumerate(texts):
        embedding = await self.create_openai_embedding(text)
        metadata = metadatas[i] if metadatas else {}
        metadata[self._text_key] = text
        docs.append({"id": ids[i], "values": embedding, "metadata": metadata})
    # upsert to Pinecone
    async with self.aiohttp_session.post(
        f"{self.pinecone_url}/vectors/upsert",
        headers={"Api-Key": self.pinecone_api_key},
        json={
            "vectors": docs,
            "namespace": namespace,
        },
    ) as response:
        response_json = await response.json()
        if "message" in response_json:
            logger.error(f"Error upserting vectors: {response_json}")
    return ids
Run more texts through the embeddings and add to the vectorstore. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. ids: Optional list of ids to associate with the texts. namespace: Optional pinecone namespace to add the texts to. Returns: List of ids from adding the texts into the vectorstore.
add_texts
python
vocodedev/vocode-core
vocode/streaming/vector_db/pinecone.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/vector_db/pinecone.py
MIT
async def similarity_search_with_score(
    self,
    query: str,
    filter: Optional[dict] = None,
    namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
    """Return pinecone documents most similar to query, along with scores.

    Args:
        query: Text to look up documents similar to.
        filter: Dictionary of argument(s) to filter on metadata
        namespace: Namespace to search in. Default will search in '' namespace.

    Returns:
        List of Documents most similar to the query and score for each
    """
    # Adapted from: langchain/vectorstores/pinecone.py. Made langchain implementation async.
    if namespace is None:
        namespace = ""
    query_obj = await self.create_openai_embedding(query)
    docs = []
    async with self.aiohttp_session.post(
        f"{self.pinecone_url}/query",
        headers={"Api-Key": self.pinecone_api_key},
        json={
            "top_k": self.config.top_k,
            "namespace": namespace,
            "filter": filter,
            "vector": query_obj,
            "includeMetadata": True,
        },
    ) as response:
        results = await response.json()
        for res in results["matches"]:
            metadata = res["metadata"]
            if self._text_key in metadata:
                text = metadata.pop(self._text_key)
                score = res["score"]
                docs.append((Document(page_content=text, metadata=metadata), score))
            else:
                logger.warning(f"Found document with no `{self._text_key}` key. Skipping.")
    return docs
Return pinecone documents most similar to query, along with scores. Args: query: Text to look up documents similar to. filter: Dictionary of argument(s) to filter on metadata namespace: Namespace to search in. Default will search in '' namespace. Returns: List of Documents most similar to the query and score for each
similarity_search_with_score
python
vocodedev/vocode-core
vocode/streaming/vector_db/pinecone.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/streaming/vector_db/pinecone.py
MIT
def __init__(self, func: Callable, *args: Tuple, **kwargs: Dict) -> None:
    """
    Constructs all the necessary attributes for the SentryConfiguredContextManager object.

    Args:
        func (Callable): The function to be executed.
        *args (Tuple): The positional arguments to pass to the function.
        **kwargs (Dict): The keyword arguments to pass to the function.
    """
    self.func = func
    self.args = args
    self.kwargs = kwargs
    self.result: Optional[Any] = None
Constructs all the necessary attributes for the SentryConfiguredContextManager object. Args: func (Callable): The function to be executed. *args (Tuple): The positional arguments to pass to the function. **kwargs (Dict): The keyword arguments to pass to the function.
__init__
python
vocodedev/vocode-core
vocode/utils/sentry_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/utils/sentry_utils.py
MIT
def is_configured(self) -> bool:
    """
    Checks if Sentry is configured.

    Returns:
        bool: True if Sentry is configured, False otherwise.
    """
    client = sentry_sdk.Hub.current.client
    if client is not None and client.options is not None and "dsn" in client.options:
        return True
    return False
Checks if Sentry is configured. Returns: bool: True if Sentry is configured, False otherwise.
is_configured
python
vocodedev/vocode-core
vocode/utils/sentry_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/utils/sentry_utils.py
MIT
def __enter__(self) -> Optional[Any]:
    """
    Executes the function if Sentry is configured.

    Returns:
        Any: The result of the function execution, or None if Sentry is not configured.
    """
    if self.is_configured:
        self.result = self.func(*self.args, **self.kwargs)
        return self.result
    else:
        return None
Executes the function if Sentry is configured. Returns: Any: The result of the function execution, or None if Sentry is not configured.
__enter__
python
vocodedev/vocode-core
vocode/utils/sentry_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/utils/sentry_utils.py
MIT
def __call__(self) -> Optional[Any]:
    """
    Executes the function if Sentry is configured, and prints a message if it's not.

    Returns:
        Any: The result of the function execution, or None if Sentry is not configured.
    """
    if self.is_configured:
        return self.func(*self.args, **self.kwargs)
    else:
        logger.debug("Sentry is not configured, skipping function execution.")
        return None
Executes the function if Sentry is configured, and prints a message if it's not. Returns: Any: The result of the function execution, or None if Sentry is not configured.
__call__
python
vocodedev/vocode-core
vocode/utils/sentry_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/utils/sentry_utils.py
MIT
def synthesizer_base_name_if_should_report_to_sentry(
    synthesizer: "BaseSynthesizer",
) -> Optional[str]:
    """Returns a synthesizer name if we should report metrics to Sentry for this
    kind of synthesizer; else returns None.
    """
    # Unknown synthesizers return None (rather than "synthesizer.None") so that
    # no metrics are reported for them, matching the docstring.
    name = _SYNTHESIZER_NAMES.get(synthesizer.__class__.__qualname__)
    return f"synthesizer.{name}" if name is not None else None
Returns a synthesizer name if we should report metrics to Sentry for this kind of synthesizer; else returns None.
synthesizer_base_name_if_should_report_to_sentry
python
vocodedev/vocode-core
vocode/utils/sentry_utils.py
https://github.com/vocodedev/vocode-core/blob/master/vocode/utils/sentry_utils.py
MIT
def HandleRequest(req, method, post_data=None):
    """Sample dynamic HTTP response handler.

    Parameters
    ----------
    req : BaseHTTPServer.BaseHTTPRequestHandler
        The BaseHTTPRequestHandler that received the request
    method: str
        The HTTP method, either 'HEAD', 'GET', 'POST' as of this writing
    post_data: str
        The HTTP post data received by calling `rfile.read()` against the
        BaseHTTPRequestHandler that received the request.
    """
    response = b'Ahoy\r\n'

    if method == 'GET':
        req.send_response(200)
        req.send_header('Content-Length', len(response))
        req.end_headers()
        req.wfile.write(response)

    elif method == 'POST':
        req.send_response(200)
        req.send_header('Content-Length', len(response))
        req.end_headers()
        req.wfile.write(response)

    elif method == 'HEAD':
        req.send_response(200)
        req.end_headers()
Sample dynamic HTTP response handler. Parameters ---------- req : BaseHTTPServer.BaseHTTPRequestHandler The BaseHTTPRequestHandler that received the request method: str The HTTP method, either 'HEAD', 'GET', 'POST' as of this writing post_data: str The HTTP post data received by calling `rfile.read()` against the BaseHTTPRequestHandler that received the request.
HandleRequest
python
mandiant/flare-fakenet-ng
fakenet/configs/CustomProviderExample.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/configs/CustomProviderExample.py
Apache-2.0
def HandleTcp(sock):
    """Handle a TCP buffer.

    Parameters
    ----------
    sock : socket
        The connected socket with which to recv and send data
    """
    while True:
        try:
            data = None
            data = sock.recv(1024)
        except socket.timeout:
            pass

        if not data:
            break

        resp = input('\nEnter a response for the TCP client: ')
        sock.sendall(resp.encode())
Handle a TCP buffer. Parameters ---------- sock : socket The connected socket with which to recv and send data
HandleTcp
python
mandiant/flare-fakenet-ng
fakenet/configs/CustomProviderExample.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/configs/CustomProviderExample.py
Apache-2.0
def HandleUdp(sock, data, addr):
    """Handle a UDP buffer.

    Parameters
    ----------
    sock : socket
        The connected socket with which to recv and send data
    data : str
        The data received
    addr : tuple
        The host and port of the remote peer
    """
    if data:
        resp = input('\nEnter a response for the UDP client: ')
        sock.sendto(resp.encode(), addr)
Handle a UDP buffer. Parameters ---------- sock : socket The connected socket with which to recv and send data data : str The data received addr : tuple The host and port of the remote peer
HandleUdp
python
mandiant/flare-fakenet-ng
fakenet/configs/CustomProviderExample.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/configs/CustomProviderExample.py
Apache-2.0
def first_packet_new_session(self):
    """Is this the first datagram from this conversation?

    Returns:
        True if this pair of endpoints hasn't conversed before, else False
    """
    # sessions.get returns (dst_ip, dport, pid, comm, dport0, proto) or
    # None. We just want dst_ip and dport for comparison.
    session = self.diverter.sessions.get(self.pkt.sport)
    if session is None:
        return True
    return not ((session.dst_ip, session.dport) == (self.pkt.dst_ip, self.pkt.dport))
Is this the first datagram from this conversation? Returns: True if this pair of endpoints hasn't conversed before, else False
first_packet_new_session
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def _validateBlackWhite(self):
    """Validate that only a black or a white list of either type (host or
    process) is configured.

    Side-effect:
        Raises ListenerBlackWhiteList if invalid
    """
    msg = None
    fmt = 'Cannot specify both %s blacklist and whitelist for port %d'

    if self.proc_wl and self.proc_bl:
        msg = fmt % ('process', self.port)
        self.proc_wl = self.proc_bl = None
    elif self.host_wl and self.host_bl:
        msg = fmt % ('host', self.port)
        self.host_wl = self.host_bl = None

    if msg:
        raise ListenerBlackWhiteList(msg)
Validate that only a black or a white list of either type (host or process) is configured. Side-effect: Raises ListenerBlackWhiteList if invalid
_validateBlackWhite
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def addListener(self, listener):
    """Add a ListenerMeta under the corresponding protocol and port."""
    proto = listener.proto
    port = listener.port

    if not proto in self.protos:
        self.protos[proto] = {}

    if port in self.protos[proto]:
        raise ListenerAlreadyBoundThere(
            'Listener already bound to %s port %s' % (proto, port))

    self.protos[proto][port] = listener
Add a ListenerMeta under the corresponding protocol and port.
addListener
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
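A short usage sketch (hypothetical; the positional ListenerMeta(protocol, port, hidden) call mirrors the one in parse_listeners_config below): registering two listeners and tripping the duplicate-binding guard.

ports = ListenerPorts()
ports.addListener(ListenerMeta('TCP', 80, False))
ports.addListener(ListenerMeta('UDP', 53, False))

try:
    ports.addListener(ListenerMeta('TCP', 80, False))  # same proto/port
except ListenerAlreadyBoundThere as exc:
    print(exc)  # 'Listener already bound to TCP port 80'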
def isHidden(self, proto, port):
    """Is this port associated with a listener that is hidden?"""
    listener = self.getListenerMeta(proto, port)
    return listener.hidden if listener else False

Is this port associated with a listener that is hidden?
isHidden
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def getExecuteCmd(self, proto, port):
    """Get the ExecuteCmd format string specified by the operator.

    Args:
        proto: The protocol name
        port: The port number

    Returns:
        The format string if applicable
        None, otherwise
    """
    listener = self.getListenerMeta(proto, port)
    if listener:
        return listener.cmd_template

Get the ExecuteCmd format string specified by the operator.

Args:
    proto: The protocol name
    port: The port number

Returns:
    The format string if applicable
    None, otherwise
getExecuteCmd
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def _isWhiteListMiss(self, thing, whitelist):
    """Check if thing is NOT in whitelist.

    Args:
        thing: thing to check whitelist for
        whitelist: list of entries

    Returns:
        True if thing is NOT in the whitelist
        False if it is, or if there is no whitelist
    """
    if not whitelist:
        return False
    return thing not in whitelist

Check if thing is NOT in whitelist.

Args:
    thing: thing to check whitelist for
    whitelist: list of entries

Returns:
    True if thing is NOT in the whitelist
    False if it is, or if there is no whitelist
_isWhiteListMiss
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def _isBlackListHit(self, thing, blacklist):
    """Check if thing is in blacklist.

    Args:
        thing: thing to check blacklist for
        blacklist: list of entries

    Returns:
        True if thing is in blacklist
        False otherwise, or if there is no blacklist
    """
    if not blacklist:
        return False
    return thing in blacklist

Check if thing is in blacklist.

Args:
    thing: thing to check blacklist for
    blacklist: list of entries

Returns:
    True if thing is in blacklist
    False otherwise, or if there is no blacklist
_isBlackListHit
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
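Since neither helper touches instance state, the filtering semantics can be restated standalone (a sketch with hypothetical process names, not FakeNet-NG code): a whitelist filters by absence, a blacklist by presence, and an unconfigured list never filters.

def is_whitelist_miss(thing, whitelist):
    return bool(whitelist) and thing not in whitelist

def is_blacklist_hit(thing, blacklist):
    return bool(blacklist) and thing in blacklist

assert is_whitelist_miss('evil.exe', ['chrome.exe'])  # not whitelisted: ignore
assert not is_whitelist_miss('evil.exe', [])          # no whitelist: pass through
assert is_blacklist_hit('evil.exe', ['evil.exe'])     # blacklisted: ignore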
def isProcessWhiteListMiss(self, proto, port, proc):
    """Check if proc is OUTSIDE the process WHITElist for a port.

    Args:
        proto: The protocol name
        port: The port number
        proc: The process name

    Returns:
        False if no listener on this port
        Return value of _isWhiteListMiss otherwise
    """
    listener = self.getListenerMeta(proto, port)
    if not listener:
        return False
    return self._isWhiteListMiss(proc, listener.proc_wl)

Check if proc is OUTSIDE the process WHITElist for a port.

Args:
    proto: The protocol name
    port: The port number
    proc: The process name

Returns:
    False if no listener on this port
    Return value of _isWhiteListMiss otherwise
isProcessWhiteListMiss
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def isProcessBlackListHit(self, proto, port, proc):
    """Check if proc is IN the process BLACKlist for a port.

    Args:
        proto: The protocol name
        port: The port number
        proc: The process name

    Returns:
        False if no listener on this port
        Return value of _isBlackListHit otherwise
    """
    listener = self.getListenerMeta(proto, port)
    if not listener:
        return False
    return self._isBlackListHit(proc, listener.proc_bl)

Check if proc is IN the process BLACKlist for a port.

Args:
    proto: The protocol name
    port: The port number
    proc: The process name

Returns:
    False if no listener on this port
    Return value of _isBlackListHit otherwise
isProcessBlackListHit
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def isHostWhiteListMiss(self, proto, port, host):
    """Check if host is OUTSIDE the host WHITElist for a port.

    Args:
        proto: The protocol name
        port: The port number
        host: The host

    Returns:
        False if no listener on this port
        Return value of _isWhiteListMiss otherwise
    """
    listener = self.getListenerMeta(proto, port)
    if not listener:
        return False
    return self._isWhiteListMiss(host, listener.host_wl)

Check if host is OUTSIDE the host WHITElist for a port.

Args:
    proto: The protocol name
    port: The port number
    host: The host

Returns:
    False if no listener on this port
    Return value of _isWhiteListMiss otherwise
isHostWhiteListMiss
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def isHostBlackListHit(self, proto, port, host):
    """Check if host is IN the host BLACKlist for a port.

    Args:
        proto: The protocol name
        port: The port number
        host: The host

    Returns:
        False if no listener on this port
        Return value of _isBlackListHit otherwise
    """
    listener = self.getListenerMeta(proto, port)
    if not listener:
        return False
    return self._isBlackListHit(host, listener.host_bl)

Check if host is IN the host BLACKlist for a port.

Args:
    proto: The protocol name
    port: The port number
    host: The host

Returns:
    False if no listener on this port
    Return value of _isBlackListHit otherwise
isHostBlackListHit
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def isDistinct(self, prev, bound_ips):
    """Not quite inequality.

    Requires the list of bound IPs for that IP protocol version, and
    recognizes when a foreign-destined packet was redirected to localhost
    or to an IP occupied by an adapter local to the system, so that
    output of these near-duplicates can be suppressed.
    """
    return ((not prev) or
            (self.pid != prev.pid) or
            (self.comm != prev.comm) or
            (self.port != prev.port) or
            ((self.ip != prev.ip) and (self.ip not in bound_ips)))

Not quite inequality.

Requires the list of bound IPs for that IP protocol version, and
recognizes when a foreign-destined packet was redirected to localhost or
to an IP occupied by an adapter local to the system, so that output of
these near-duplicates can be suppressed.
isDistinct
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def __init__(self, diverter_config, listeners_config, ip_addrs,
             logging_level=logging.INFO):
    """Initialize the DiverterBase.

    TODO: Replace the sys.exit() calls from this function with exceptions
    or some other mechanism appropriate for allowing the user of this
    class to programmatically detect and handle these cases in their own
    way. This may entail moving configuration parsing to a method with a
    return value, or modifying fakenet.py to handle Diverter exceptions.

    Args:
        diverter_config: A dict of [Diverter] config section
        listeners_config: A dict of listener configuration sections
        ip_addrs: dictionary keyed by integers 4 and 6, with each element
            being a list and each list member being a str that is an
            ASCII representation of an IP address that is associated with
            a local interface on this system.
        logging_level: Optional integer logging level such as
            logging.DEBUG

    Returns:
        None
    """
    # For fine-grained control of subclass debug output. Does not control
    # debug output from DiverterBase. To see DiverterBase debug output,
    # pass logging.DEBUG as the logging_level argument to init_base.
    self.pdebug_level = 0
    self.pdebug_labels = dict()

    # Override in Windows implementation
    self.running_on_windows = False

    self.pid = os.getpid()

    self.ip_addrs = ip_addrs

    self.pcap = None
    self.pcap_filename = ''
    self.pcap_lock = None

    self.logger = logging.getLogger('Diverter')
    self.logger.setLevel(logging_level)

    # Network Based Indicators
    self.nbis = {}

    # Index remote Process IDs for MultiHost operations
    self.remote_pid_counter = 0

    # Maps Proxy initiated source ports to original source ports
    self.proxy_sport_to_orig_sport_map = {}

    # Maps (proxy_sport, orig_sport) to pkt SSL encryption
    self.is_proxied_pkt_ssl_encrypted = {}

    # Rate limiting for displaying pid/comm/proto/IP/port
    self.last_conn = None

    portlists = ['BlackListPortsTCP', 'BlackListPortsUDP']
    stringlists = ['HostBlackList']
    idlists = ['BlackListIDsICMP']
    self.configure(diverter_config, portlists, stringlists, idlists)
    self.listeners_config = dict((k.lower(), v)
                                 for k, v in listeners_config.items())

    # Local IP address
    self.external_ip = socket.gethostbyname(socket.gethostname())
    self.loopback_ip = socket.gethostbyname('localhost')

    # Sessions cache
    # NOTE: A dictionary of source ports mapped to destination address,
    # port tuples
    self.sessions = dict()

    # Manage logging of foreign-destined packets
    self.nonlocal_ips_already_seen = []
    self.log_nonlocal_only_once = True

    # Port forwarding table, for looking up original unbound service
    # ports when sending replies to foreign endpoints that have attempted
    # to communicate with unbound ports. Allows fixing up source ports in
    # response packets. Similar to the `sessions` member of the Windows
    # Diverter implementation.
    self.port_fwd_table = dict()
    self.port_fwd_table_lock = threading.Lock()

    # Track conversations that will be ignored so that e.g. an RST
    # response from a closed port does not erroneously trigger port
    # forwarding and silence later replies to legitimate clients.
    self.ignore_table = dict()
    self.ignore_table_lock = threading.Lock()

    # IP forwarding table, for looking up original foreign destination
    # IPs when sending replies to local endpoints that have attempted to
    # communicate with other machines e.g. via hard-coded C2 IP
    # addresses.
    self.ip_fwd_table = dict()
    self.ip_fwd_table_lock = threading.Lock()

    # Ports bound by FakeNet-NG listeners
    self.listener_ports = ListenerPorts()

    # Parse listener configurations
    self.parse_listeners_config(listeners_config)

    #######################################################################
    # Diverter settings

    # Default TCP/UDP listeners
    self.default_listener = dict()

    # Global TCP/UDP port blacklist
    self.blacklist_ports = {'TCP': [], 'UDP': []}

    # Global ICMP ID blacklist
    self.blacklist_ids = {'ICMP': []}

    # Global process blacklist
    # TODO: Allow PIDs
    self.blacklist_processes = []
    self.whitelist_processes = []

    # Global host blacklist
    # TODO: Allow domain resolution
    self.blacklist_hosts = []

    # Parse diverter config
    self.parse_diverter_config()

    slists = ['DebugLevel', ]
    self.reconfigure(portlists=[], stringlists=slists)

    dbg_lvl = 0
    if self.is_configured('DebugLevel'):
        for label in self.getconfigval('DebugLevel'):
            label = label.upper()
            if label == 'OFF':
                dbg_lvl = 0
                break
            if label not in DLABELS_INV:
                self.logger.warning('No such DebugLevel as %s' % (label))
            else:
                dbg_lvl |= DLABELS_INV[label]
    self.set_debug_level(dbg_lvl, DLABELS)

    #######################################################################
    # Network verification - Implemented in OS-specific mixin

    # Check active interfaces
    if not self.check_active_ethernet_adapters():
        self.logger.critical('ERROR: No active ethernet interfaces '
                             'detected!')
        self.logger.critical('       Please enable a network interface.')
        sys.exit(1)

    # Check configured IP addresses
    if not self.check_ipaddresses():
        self.logger.critical('ERROR: No interface had an IP address '
                             'configured!')
        self.logger.critical('       Please configure an IP address on '
                             'a network interface.')
        sys.exit(1)

    # Check configured gateways
    gw_ok = self.check_gateways()
    if not gw_ok:
        self.logger.warning('WARNING: No gateways configured!')
        if self.is_set('fixgateway'):
            gw_ok = self.fix_gateway()
            if not gw_ok:
                self.logger.warning('Cannot fix gateway')

    if not gw_ok:
        self.logger.warning('         Please configure a default '
                            'gateway or route in order to intercept '
                            'external traffic.')
        self.logger.warning('         Current interception abilities '
                            'are limited to local traffic.')

    # Check configured DNS servers
    dns_ok = self.check_dns_servers()
    if not dns_ok:
        self.logger.warning('WARNING: No DNS servers configured!')
        if self.is_set('fixdns'):
            dns_ok = self.fix_dns()
            if not dns_ok:
                self.logger.warning('Cannot fix DNS')

    if not dns_ok:
        self.logger.warning('         Please configure a DNS server '
                            'in order to allow network resolution.')

    # OS-specific Diverters must initialize e.g. WinDivert,
    # libnetfilter_queue, pf/alf, etc.

Initialize the DiverterBase.

TODO: Replace the sys.exit() calls from this function with exceptions or
some other mechanism appropriate for allowing the user of this class to
programmatically detect and handle these cases in their own way. This may
entail moving configuration parsing to a method with a return value, or
modifying fakenet.py to handle Diverter exceptions.

Args:
    diverter_config: A dict of [Diverter] config section
    listeners_config: A dict of listener configuration sections
    ip_addrs: dictionary keyed by integers 4 and 6, with each element
        being a list and each list member being a str that is an ASCII
        representation of an IP address that is associated with a local
        interface on this system.
    logging_level: Optional integer logging level such as logging.DEBUG

Returns:
    None
__init__
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def set_debug_level(self, lvl, labels={}):
    """Enable debug output if necessary, set the debug output level, and
    maintain a reference to the dictionary of labels to print when a
    given logging level is encountered.

    Args:
        lvl: An int mask of all debug logging levels
        labels: A dict of int => str assigning names to each debug level

    Returns:
        None
    """
    if lvl:
        self.logger.setLevel(logging.DEBUG)

    self.pdebug_level = lvl
    self.pdebug_labels = labels

Enable debug output if necessary, set the debug output level, and
maintain a reference to the dictionary of labels to print when a given
logging level is encountered.

Args:
    lvl: An int mask of all debug logging levels
    labels: A dict of int => str assigning names to each debug level

Returns:
    None
set_debug_level
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def pdebug(self, lvl, s):
    """Log only the debug trace messages that have been enabled via
    set_debug_level.

    Args:
        lvl: An int indicating the debug level of this message
        s: The message

    Returns:
        None
    """
    if self.pdebug_level & lvl:
        label = self.pdebug_labels.get(lvl)
        prefix = '[' + label + '] ' if label else '[some component] '
        self.logger.debug(prefix + str(s))

Log only the debug trace messages that have been enabled via
set_debug_level.

Args:
    lvl: An int indicating the debug level of this message
    s: The message

Returns:
    None
pdebug
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
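The level mask and label dictionary behave like standard bit flags; a standalone sketch with made-up categories (the real DLABELS mapping lives elsewhere in the module and is not shown here):

# Hypothetical debug categories, one bit each
DIGN_DEMO, DPCAP_DEMO = 1 << 0, 1 << 1
LABELS = {DIGN_DEMO: 'IGN', DPCAP_DEMO: 'PCAP'}

mask = DIGN_DEMO | DPCAP_DEMO  # enable both categories

def pdebug_demo(lvl, msg):
    if mask & lvl:
        label = LABELS.get(lvl)
        print(('[' + label + '] ' if label else '[some component] ') + msg)

pdebug_demo(DPCAP_DEMO, 'Writing initial packet')  # [PCAP] Writing initial packet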
def check_privileged(self):
    """UNIXy and Windows-oriented check for superuser privileges.

    Returns:
        True if superuser, else False
    """
    try:
        privileged = (os.getuid() == 0)
    except AttributeError:
        privileged = (ctypes.windll.shell32.IsUserAnAdmin() != 0)

    return privileged

UNIXy and Windows-oriented check for superuser privileges.

Returns:
    True if superuser, else False
check_privileged
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def parse_listeners_config(self, listeners_config):
    """Parse listener config sections.

    TODO: Replace the sys.exit() calls from this function with exceptions
    or some other mechanism appropriate for allowing the user of this
    class to programmatically detect and handle these cases in their own
    way. This may entail modifying fakenet.py.

    Args:
        listeners_config: A dict of listener configuration sections

    Returns:
        None
    """
    #######################################################################
    # Populate diverter ports and process filters from the configuration
    for listener_name, listener_config in listeners_config.items():

        if 'port' in listener_config:

            port = int(listener_config['port'])

            hidden = (listener_config.get('hidden', 'false').lower() ==
                      'true')

            if 'protocol' not in listener_config:
                self.logger.error('ERROR: Protocol not defined for '
                                  'listener %s', listener_name)
                sys.exit(1)

            protocol = listener_config['protocol'].upper()

            if protocol not in ['TCP', 'UDP']:
                self.logger.error('ERROR: Invalid protocol %s for '
                                  'listener %s', protocol, listener_name)
                sys.exit(1)

            listener = ListenerMeta(protocol, port, hidden)

            ###############################################################
            # Process filtering configuration
            if ('processwhitelist' in listener_config and
                    'processblacklist' in listener_config):
                self.logger.error('ERROR: Listener can\'t have both '
                                  'process whitelist and blacklist.')
                sys.exit(1)

            elif 'processwhitelist' in listener_config:
                self.logger.debug('Process whitelist:')
                whitelist = listener_config['processwhitelist']
                listener.setProcessWhitelist(whitelist)
                # for port in self.port_process_whitelist[protocol]:
                #     self.logger.debug(' Port: %d (%s) Processes: %s',
                #                       port, protocol, ', '.join(
                #         self.port_process_whitelist[protocol][port]))

            elif 'processblacklist' in listener_config:
                self.logger.debug('Process blacklist:')
                blacklist = listener_config['processblacklist']
                listener.setProcessBlacklist(blacklist)
                # for port in self.port_process_blacklist[protocol]:
                #     self.logger.debug(' Port: %d (%s) Processes: %s',
                #                       port, protocol, ', '.join(
                #         self.port_process_blacklist[protocol][port]))

            ###############################################################
            # Host filtering configuration
            if ('hostwhitelist' in listener_config and
                    'hostblacklist' in listener_config):
                self.logger.error('ERROR: Listener can\'t have both '
                                  'host whitelist and blacklist.')
                sys.exit(1)

            elif 'hostwhitelist' in listener_config:
                self.logger.debug('Host whitelist:')
                host_whitelist = listener_config['hostwhitelist']
                listener.setHostWhitelist(host_whitelist)
                # for port in self.port_host_whitelist[protocol]:
                #     self.logger.debug(' Port: %d (%s) Hosts: %s', port,
                #                       protocol, ', '.join(
                #         self.port_host_whitelist[protocol][port]))

            elif 'hostblacklist' in listener_config:
                self.logger.debug('Host blacklist:')
                host_blacklist = listener_config['hostblacklist']
                listener.setHostBlacklist(host_blacklist)
                # for port in self.port_host_blacklist[protocol]:
                #     self.logger.debug(' Port: %d (%s) Hosts: %s', port,
                #                       protocol, ', '.join(
                #         self.port_host_blacklist[protocol][port]))

            # Listener metadata is now configured, add it to the
            # dictionary
            self.listener_ports.addListener(listener)

            ###############################################################
            # Execute command configuration
            if 'executecmd' in listener_config:
                template = listener_config['executecmd'].strip()

                # Would prefer not to get into the middle of a debug
                # session and learn that a typo has ruined the day, so we
                # test beforehand to make sure all the user-specified
                # insertion strings are valid.
                test = self._build_cmd(template, 0, 'test', '1.2.3.4',
                                       12345, '4.3.2.1', port)
                if not test:
                    self.logger.error(('Terminating due to incorrectly '
                                       'configured ExecuteCmd for '
                                       'listener %s') % (listener_name))
                    sys.exit(1)

                listener.setExecuteCmd(template)
                self.logger.debug('Port %d (%s) ExecuteCmd: %s', port,
                                  protocol, template)

Parse listener config sections.

TODO: Replace the sys.exit() calls from this function with exceptions or
some other mechanism appropriate for allowing the user of this class to
programmatically detect and handle these cases in their own way. This may
entail modifying fakenet.py.

Args:
    listeners_config: A dict of listener configuration sections

Returns:
    None
parse_listeners_config
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def build_cmd(self, pkt, pid, comm):
    """Retrieve the ExecuteCmd directive if applicable and build the
    command to execute.

    Args:
        pkt: An fnpacket.PacketCtx or derived object
        pid: Process ID associated with the packet
        comm: Process name (command) that sent the packet

    Returns:
        A str that is the resultant command to execute
    """
    cmd = None

    template = self.listener_ports.getExecuteCmd(pkt.proto, pkt.dport)
    if template:
        cmd = self._build_cmd(template, pid, comm, pkt.src_ip, pkt.sport,
                              pkt.dst_ip, pkt.dport)

    return cmd

Retrieve the ExecuteCmd directive if applicable and build the command to
execute.

Args:
    pkt: An fnpacket.PacketCtx or derived object
    pid: Process ID associated with the packet
    comm: Process name (command) that sent the packet

Returns:
    A str that is the resultant command to execute
build_cmd
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def _build_cmd(self, tmpl, pid, comm, src_ip, sport, dst_ip, dport):
    """Build a command based on the template specified in an ExecuteCmd
    config directive, applying the parameters as needed.

    Accepts individual arguments instead of an fnpacket.PacketCtx so that
    the Diverter can test any ExecuteCmd directives at configuration time
    without having to synthesize an fnpacket.PacketCtx or construct a
    NamedTuple to satisfy the requirement for such an argument.

    Args:
        tmpl: A str containing the body of the ExecuteCmd config directive
        pid: Process ID associated with the packet
        comm: Process name (command) that sent the packet
        src_ip: The source IP address that originated the packet
        sport: The source port that originated the packet
        dst_ip: The destination IP that the packet was directed at
        dport: The destination port that the packet was directed at

    Returns:
        A str that is the resultant command to execute
    """
    cmd = None

    try:
        cmd = tmpl.format(
            pid=str(pid),
            procname=str(comm),
            src_addr=str(src_ip),
            src_port=str(sport),
            dst_addr=str(dst_ip),
            dst_port=str(dport))
    except KeyError as e:
        self.logger.error(('Failed to build ExecuteCmd for port %d due '
                           'to erroneous format key: %s') %
                          (dport, str(e)))

    return cmd

Build a command based on the template specified in an ExecuteCmd config
directive, applying the parameters as needed.

Accepts individual arguments instead of an fnpacket.PacketCtx so that the
Diverter can test any ExecuteCmd directives at configuration time without
having to synthesize an fnpacket.PacketCtx or construct a NamedTuple to
satisfy the requirement for such an argument.

Args:
    tmpl: A str containing the body of the ExecuteCmd config directive
    pid: Process ID associated with the packet
    comm: Process name (command) that sent the packet
    src_ip: The source IP address that originated the packet
    sport: The source port that originated the packet
    dst_ip: The destination IP that the packet was directed at
    dport: The destination port that the packet was directed at

Returns:
    A str that is the resultant command to execute
_build_cmd
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
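The directive is just a str.format template over a fixed set of keys ({pid}, {procname}, {src_addr}, {src_port}, {dst_addr}, {dst_port}); a standalone illustration with hypothetical values:

tmpl = 'tcpdump -i lo host {src_addr} and port {dst_port}'
cmd = tmpl.format(pid='1234', procname='evil.exe',
                  src_addr='192.168.1.2', src_port='49152',
                  dst_addr='192.168.1.1', dst_port='80')
print(cmd)  # tcpdump -i lo host 192.168.1.2 and port 80

try:
    '{bogus}'.format(pid='1234')
except KeyError as e:
    print('bad key:', e)  # the failure mode _build_cmd logs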
def execute_detached(self, execute_cmd):
    """OS-agnostic asynchronous subprocess creation.

    Executes the process with the appropriate subprocess.Popen parameters
    for UNIXy or Windows platforms to isolate the process from FakeNet-NG
    to prevent it from being interrupted by termination of FakeNet-NG,
    Ctrl-C, etc.

    Args:
        execute_cmd: A str that is the command to execute

    Side-effects:
        Creates the specified process.

    Returns:
        Success => an int that is the pid of the new process
        Failure => None
    """
    DETACHED_PROCESS = 0x00000008
    cflags = DETACHED_PROCESS if self.running_on_windows else 0
    cfds = False if self.running_on_windows else True
    shl = False if self.running_on_windows else True

    def ign_sigint():
        # Prevent KeyboardInterrupt in FakeNet-NG's console from
        # terminating child processes
        signal.signal(signal.SIGINT, signal.SIG_IGN)

    preexec = None if self.running_on_windows else ign_sigint

    try:
        pid = subprocess.Popen(execute_cmd, creationflags=cflags,
                               shell=shl,
                               close_fds=cfds,
                               preexec_fn=preexec).pid
    except Exception as e:
        self.logger.error('Exception of type %s' % (str(type(e))))
        self.logger.error('Error: Failed to execute command: %s',
                          execute_cmd)
        self.logger.error('       %s', e)
    else:
        return pid

OS-agnostic asynchronous subprocess creation.

Executes the process with the appropriate subprocess.Popen parameters for
UNIXy or Windows platforms to isolate the process from FakeNet-NG to
prevent it from being interrupted by termination of FakeNet-NG, Ctrl-C,
etc.

Args:
    execute_cmd: A str that is the command to execute

Side-effects:
    Creates the specified process.

Returns:
    Success => an int that is the pid of the new process
    Failure => None
execute_detached
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
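The same detachment pattern extracted into a standalone sketch (the command strings are placeholders): on POSIX, preexec_fn installs the SIGINT-ignore handler in the child before exec, while on Windows the DETACHED_PROCESS creation flag severs the child from the console.

import signal
import subprocess
import sys

ON_WINDOWS = sys.platform.startswith('win')
DETACHED_PROCESS = 0x00000008  # Windows CreateProcess flag

def ign_sigint():
    signal.signal(signal.SIGINT, signal.SIG_IGN)

child = subprocess.Popen(
    'ping 127.0.0.1' if ON_WINDOWS else ['sleep', '5'],
    creationflags=DETACHED_PROCESS if ON_WINDOWS else 0,
    close_fds=not ON_WINDOWS,
    preexec_fn=None if ON_WINDOWS else ign_sigint)
print('detached child pid:', child.pid)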
def parse_diverter_config(self):
    """Parse [Diverter] config section.

    Args:
        N/A

    Side-effects:
        Diverter members (whitelists, pcap, etc.) initialized.

    Returns:
        None
    """
    # SingleHost vs MultiHost mode
    self.network_mode = 'SingleHost'  # Default
    self.single_host_mode = True
    if self.is_configured('networkmode'):
        self.network_mode = self.getconfigval('networkmode')
        available_modes = ['singlehost', 'multihost']

        # Constrain argument values
        if self.network_mode.lower() not in available_modes:
            self.logger.error('NetworkMode must be one of %s' %
                              (available_modes))
            sys.exit(1)

        # Adjust previously assumed mode if operator specifies MultiHost
        if self.network_mode.lower() == 'multihost':
            self.single_host_mode = False

    if (self.getconfigval('processwhitelist') and
            self.getconfigval('processblacklist')):
        self.logger.error('ERROR: Diverter can\'t have both process '
                          'whitelist and blacklist.')
        sys.exit(1)

    if self.is_set('dumppackets'):
        self.pcap_filename = '%s_%s.pcap' % (self.getconfigval(
            'dumppacketsfileprefix', 'packets'),
            time.strftime('%Y%m%d_%H%M%S'))
        self.logger.info('Capturing traffic to %s', self.pcap_filename)
        self.pcap = dpkt.pcap.Writer(open(self.pcap_filename, 'wb'),
                                     linktype=dpkt.pcap.DLT_RAW)
        self.pcap_lock = threading.Lock()

    # Do not redirect blacklisted processes
    if self.is_configured('processblacklist'):
        self.blacklist_processes = [
            process.strip() for process in
            self.getconfigval('processblacklist').split(',')]
        self.logger.debug('Blacklisted processes: %s', ', '.join(
            [str(p) for p in self.blacklist_processes]))
        if self.logger.level == logging.INFO:
            self.logger.info('Hiding logs from blacklisted processes')

    # Only redirect whitelisted processes
    if self.is_configured('processwhitelist'):
        self.whitelist_processes = [
            process.strip() for process in
            self.getconfigval('processwhitelist').split(',')]
        self.logger.debug('Whitelisted processes: %s', ', '.join(
            [str(p) for p in self.whitelist_processes]))

    # Do not redirect blacklisted hosts
    if self.is_configured('hostblacklist'):
        self.blacklist_hosts = self.getconfigval('hostblacklist')
        self.logger.debug('Blacklisted hosts: %s', ', '.join(
            [str(p) for p in self.getconfigval('hostblacklist')]))

    # Redirect all traffic
    self.default_listener = {'TCP': None, 'UDP': None}
    if self.is_set('redirectalltraffic'):
        if self.is_unconfigured('defaulttcplistener'):
            self.logger.error('ERROR: No default TCP listener specified '
                              'in the configuration.')
            sys.exit(1)

        elif self.is_unconfigured('defaultudplistener'):
            self.logger.error('ERROR: No default UDP listener specified '
                              'in the configuration.')
            sys.exit(1)

        elif not (self.getconfigval('defaulttcplistener').lower() in
                  self.listeners_config):
            self.logger.error('ERROR: No configuration exists for '
                              'default TCP listener %s',
                              self.getconfigval('defaulttcplistener'))
            sys.exit(1)

        elif not (self.getconfigval('defaultudplistener').lower() in
                  self.listeners_config):
            self.logger.error('ERROR: No configuration exists for '
                              'default UDP listener %s',
                              self.getconfigval('defaultudplistener'))
            sys.exit(1)

        else:
            default_listener = self.getconfigval('defaulttcplistener').lower()
            default_port = self.listeners_config[default_listener]['port']
            self.default_listener['TCP'] = int(default_port)
            self.logger.debug('Using default listener %s on port %d',
                              self.getconfigval('defaulttcplistener').lower(),
                              self.default_listener['TCP'])

            default_listener = self.getconfigval('defaultudplistener').lower()
            default_port = self.listeners_config[default_listener]['port']
            self.default_listener['UDP'] = int(default_port)
            self.logger.debug('Using default listener %s on port %d',
                              self.getconfigval('defaultudplistener').lower(),
                              self.default_listener['UDP'])

    # Re-marshall these into a readily usable form...

    # Do not redirect blacklisted TCP ports
    if self.is_configured('blacklistportstcp'):
        self.blacklist_ports['TCP'] = \
            self.getconfigval('blacklistportstcp')
        self.logger.debug('Blacklisted TCP ports: %s', ', '.join(
            [str(p) for p in self.getconfigval('BlackListPortsTCP')]))

    # Do not redirect blacklisted UDP ports
    if self.is_configured('blacklistportsudp'):
        self.blacklist_ports['UDP'] = \
            self.getconfigval('blacklistportsudp')
        self.logger.debug('Blacklisted UDP ports: %s', ', '.join(
            [str(p) for p in self.getconfigval('BlackListPortsUDP')]))

    # Do not redirect blacklisted ICMP IDs
    if self.is_configured('blacklistidsicmp'):
        self.blacklist_ids['ICMP'] = \
            self.getconfigval('blacklistidsicmp')
        self.logger.debug('Blacklisted ICMP IDs: %s', ', '.join(
            [str(c) for c in self.getconfigval('BlackListIDsICMP')]))

Parse [Diverter] config section.

Args:
    N/A

Side-effects:
    Diverter members (whitelists, pcap, etc.) initialized.

Returns:
    None
parse_diverter_config
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def write_pcap(self, pkt):
    """Writes a packet to the pcap.

    Args:
        pkt: A fnpacket.PacketCtx or derived object

    Returns:
        None

    Side-effects:
        Calls dpkt.pcap.Writer.writepkt to persist the octets
    """
    if self.pcap and self.pcap_lock:
        with self.pcap_lock:
            mangled = 'mangled' if pkt.mangled else 'initial'
            self.pdebug(DPCAP, 'Writing %s packet %s' %
                        (mangled, pkt.hdrToStr2()))
            self.pcap.writepkt(pkt.octets)

Writes a packet to the pcap.

Args:
    pkt: A fnpacket.PacketCtx or derived object

Returns:
    None

Side-effects:
    Calls dpkt.pcap.Writer.writepkt to persist the octets
write_pcap
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
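A standalone sketch of the same dpkt writer setup (demo.pcap and the empty IPv4 header are placeholders): DLT_RAW means each pcap record is a bare IP packet with no link-layer header, which is why raw octets can be written directly.

import dpkt

with open('demo.pcap', 'wb') as f:
    writer = dpkt.pcap.Writer(f, linktype=dpkt.pcap.DLT_RAW)
    writer.writepkt(bytes(dpkt.ip.IP()))  # one minimal IPv4 header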
def formatPkt(self, pkt, pid, comm):
    """Format a packet analysis log line for DGENPKTV.

    Args:
        pkt: A fnpacket.PacketCtx or derived object
        pid: Process ID associated with the packet
        comm: Process executable name

    Returns:
        A str containing the log line
    """
    logline = ''

    fmt = ('| {label} {proto} | {pid:>6} | {comm:<8} | '
           '{src:>15}:{sport:<5} | {dst:>15}:{dport:<5} | {length:>5} | '
           '{flags:<11} | {seqack:<35} |')

    if pkt.proto == 'UDP':
        logline = fmt.format(
            label=pkt.label,
            proto=pkt.proto,
            pid=str(pid),
            comm=str(comm),
            src=pkt.src_ip,
            sport=pkt.sport,
            dst=pkt.dst_ip,
            dport=pkt.dport,
            length=len(pkt),
            flags='',
            seqack='',
        )

    elif pkt.proto == 'TCP':
        tcp = pkt._hdr.data

        sa = 'Seq=%d, Ack=%d' % (tcp.seq, tcp.ack)

        f = []
        if (tcp.flags & dpkt.tcp.TH_RST) != 0:
            f.append('RST')
        if (tcp.flags & dpkt.tcp.TH_SYN) != 0:
            f.append('SYN')
        if (tcp.flags & dpkt.tcp.TH_ACK) != 0:
            f.append('ACK')
        if (tcp.flags & dpkt.tcp.TH_FIN) != 0:
            f.append('FIN')
        if (tcp.flags & dpkt.tcp.TH_PUSH) != 0:
            f.append('PSH')

        logline = fmt.format(
            label=pkt.label,
            proto=pkt.proto,
            pid=str(pid),
            comm=str(comm),
            src=pkt.src_ip,
            sport=pkt.sport,
            dst=pkt.dst_ip,
            dport=pkt.dport,
            length=len(pkt),
            flags=','.join(f),
            seqack=sa,
        )

    else:
        logline = fmt.format(
            label=pkt.label,
            proto='UNK',
            pid=str(pid),
            comm=str(comm),
            src=str(pkt.src_ip),
            sport=str(pkt.sport),
            dst=str(pkt.dst_ip),
            dport=str(pkt.dport),
            length=len(pkt),
            flags='',
            seqack='',
        )

    return logline

Format a packet analysis log line for DGENPKTV.

Args:
    pkt: A fnpacket.PacketCtx or derived object
    pid: Process ID associated with the packet
    comm: Process executable name

Returns:
    A str containing the log line
formatPkt
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
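The flag decoding relies on dpkt's TH_* bit constants; a standalone check (the SYN/ACK mask here is a made-up example):

import dpkt

flags = dpkt.tcp.TH_SYN | dpkt.tcp.TH_ACK
names = [name for name, bit in (('RST', dpkt.tcp.TH_RST),
                                ('SYN', dpkt.tcp.TH_SYN),
                                ('ACK', dpkt.tcp.TH_ACK),
                                ('FIN', dpkt.tcp.TH_FIN),
                                ('PSH', dpkt.tcp.TH_PUSH))
         if flags & bit]
print(','.join(names))  # SYN,ACK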
def check_should_ignore(self, pkt, pid, comm):
    """Indicate whether a packet should be passed without mangling.

    Checks whether the packet matches black and whitelists, or whether it
    signifies an FTP Active Mode connection.

    Args:
        pkt: A fnpacket.PacketCtx or derived object
        pid: Process ID associated with the packet
        comm: Process executable name

    Returns:
        True if the packet should be left alone, else False.
    """
    src_ip = pkt.src_ip0
    sport = pkt.sport0
    dst_ip = pkt.dst_ip0
    dport = pkt.dport0

    if not self.is_set('redirectalltraffic'):
        self.pdebug(DIGN, 'Ignoring %s packet %s' %
                    (pkt.proto, pkt.hdrToStr()))
        return True

    # SingleHost mode checks
    if self.single_host_mode:
        if comm:
            # Check process blacklist
            if comm in self.blacklist_processes:
                self.pdebug(DIGN, ('Ignoring %s packet from process %s '
                                   'in the process blacklist.') %
                            (pkt.proto, comm))
                self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
                return True

            # Check process whitelist
            elif (len(self.whitelist_processes) and
                    (comm not in self.whitelist_processes)):
                self.pdebug(DIGN, ('Ignoring %s packet from process %s '
                                   'not in the process whitelist.') %
                            (pkt.proto, comm))
                self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
                return True

            # Check per-listener blacklisted process list
            elif self.listener_ports.isProcessBlackListHit(
                    pkt.proto, dport, comm):
                self.pdebug(DIGN, ('Ignoring %s request packet from '
                                   'process %s in the listener process '
                                   'blacklist.') % (pkt.proto, comm))
                self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
                return True

            # Check per-listener whitelisted process list
            elif self.listener_ports.isProcessWhiteListMiss(
                    pkt.proto, dport, comm):
                self.pdebug(DIGN, ('Ignoring %s request packet from '
                                   'process %s not in the listener '
                                   'process whitelist.') %
                            (pkt.proto, comm))
                self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
                return True

    # MultiHost mode checks
    else:
        pass  # None as of yet

    # Checks independent of mode

    # Forwarding blacklisted port
    if pkt.proto:
        if set(self.blacklist_ports[pkt.proto]).intersection(
                [sport, dport]):
            self.pdebug(DIGN, 'Forwarding blacklisted port %s packet:' %
                        (pkt.proto))
            self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
            return True

    # Check host blacklist
    global_host_blacklist = self.getconfigval('hostblacklist')
    if global_host_blacklist and dst_ip in global_host_blacklist:
        self.pdebug(DIGN, ('Ignoring %s packet to %s in the host '
                           'blacklist.') % (str(pkt.proto), dst_ip))
        self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
        self.logger.error('IGN: host blacklist match')
        return True

    # Check the port host whitelist
    if self.listener_ports.isHostWhiteListMiss(pkt.proto, dport, dst_ip):
        self.pdebug(DIGN, ('Ignoring %s request packet to %s not in '
                           'the listener host whitelist.') %
                    (pkt.proto, dst_ip))
        self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
        return True

    # Check the port host blacklist
    if self.listener_ports.isHostBlackListHit(pkt.proto, dport, dst_ip):
        self.pdebug(DIGN, ('Ignoring %s request packet to %s in the '
                           'listener host blacklist.') %
                    (pkt.proto, dst_ip))
        self.pdebug(DIGN, '  %s' % (pkt.hdrToStr()))
        return True

    # Duplicated from diverters/windows.py:
    # HACK: FTP Passive Mode Handling
    # Check if a listener is initiating a new connection from a
    # non-diverted port and add it to the blacklist. This is done to
    # handle a special use-case of FTP ACTIVE mode where the FTP server
    # initiates a new connection for which the response may be redirected
    # to a default listener.
    # NOTE: Additional testing can be performed to check if this is
    # actually a SYN packet
    if pid == self.pid:
        if (
            ((dst_ip in self.ip_addrs[pkt.ipver]) and
             (not dst_ip.startswith('127.'))) and
            ((src_ip in self.ip_addrs[pkt.ipver]) and
             (not src_ip.startswith('127.'))) and
            (not self.listener_ports.intersectsWithPorts(
                pkt.proto, [sport, dport]))
           ):
            self.pdebug(DIGN | DFTP, 'Listener initiated %s connection' %
                        (pkt.proto))
            self.pdebug(DIGN | DFTP, '  %s' % (pkt.hdrToStr()))
            self.pdebug(DIGN | DFTP, '  Blacklisting port %d' % (sport))
            self.blacklist_ports[pkt.proto].append(sport)
            return True

    return False

Indicate whether a packet should be passed without mangling.

Checks whether the packet matches black and whitelists, or whether it
signifies an FTP Active Mode connection.

Args:
    pkt: A fnpacket.PacketCtx or derived object
    pid: Process ID associated with the packet
    comm: Process executable name

Returns:
    True if the packet should be left alone, else False.
check_should_ignore
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def check_log_icmp(self, crit, pkt):
    """Log an ICMP packet if the header was parsed as ICMP.

    Args:
        crit: A DivertParms object
        pkt: An fnpacket.PacketCtx or derived object

    Returns:
        None
    """
    if (pkt.is_icmp and (not self.running_on_windows or
                         pkt.icmp_id not in self.blacklist_ids["ICMP"])):
        self.logger.info('ICMP type %d code %d %s' % (
            pkt.icmp_type, pkt.icmp_code, pkt.hdrToStr()))

Log an ICMP packet if the header was parsed as ICMP.

Args:
    crit: A DivertParms object
    pkt: An fnpacket.PacketCtx or derived object

Returns:
    None
check_log_icmp
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def getOriginalDestPort(self, orig_src_ip, orig_src_port, proto):
    """Return original destination port, or None if it was not redirected.

    The proxy listener uses this method to obtain and provide port
    information to listeners in the taste() callback as an extra hint as
    to whether the traffic may be appropriate for parsing by that
    listener.

    Args:
        orig_src_ip: A str that is the ASCII representation of the peer IP
        orig_src_port: An int that is the source port of the peer
        proto: The protocol name

    Returns:
        The original destination port if the packet was redirected
        None, otherwise
    """
    orig_src_key = fnpacket.PacketCtx.gen_endpoint_key(proto,
                                                       orig_src_ip,
                                                       orig_src_port)
    with self.port_fwd_table_lock:
        return self.port_fwd_table.get(orig_src_key)

Return original destination port, or None if it was not redirected.

The proxy listener uses this method to obtain and provide port
information to listeners in the taste() callback as an extra hint as to
whether the traffic may be appropriate for parsing by that listener.

Args:
    orig_src_ip: A str that is the ASCII representation of the peer IP
    orig_src_port: An int that is the source port of the peer
    proto: The protocol name

Returns:
    The original destination port if the packet was redirected
    None, otherwise
getOriginalDestPort
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def maybe_redir_ip(self, crit, pkt, pid, comm):
    """Conditionally redirect foreign destination IPs to localhost.

    On Linux, this is used only under SingleHost mode.

    Args:
        crit: DivertParms object
        pkt: fnpacket.PacketCtx or derived object
        pid: int process ID associated with the packet
        comm: Process name (command) that sent the packet

    Side-effects:
        May mangle the packet by modifying the destination IP to point to
        a loopback or external interface IP local to the system where
        FakeNet-NG is running.

    Returns:
        None
    """
    if self.check_should_ignore(pkt, pid, comm):
        return

    self.pdebug(DIPNAT, 'Condition 1 test')

    # Condition 1: If the remote IP address is foreign to this system,
    # then redirect it to a local IP address.
    if self.single_host_mode and (pkt.dst_ip not in
                                  self.ip_addrs[pkt.ipver]):
        self.pdebug(DIPNAT, 'Condition 1 satisfied')

        with self.ip_fwd_table_lock:
            self.ip_fwd_table[pkt.skey] = pkt.dst_ip

        newdst = self.getNewDestinationIp(pkt.src_ip)

        self.pdebug(DIPNAT, 'REDIRECTING %s to IP %s' %
                    (pkt.hdrToStr(), newdst))
        pkt.dst_ip = newdst

    else:
        # Delete any stale entries in the IP forwarding table: If the
        # local endpoint appears to be reusing a client port that was
        # formerly used to connect to a foreign host (but not anymore),
        # then remove the entry. This prevents a packet hook from
        # faithfully overwriting the source IP on a later packet to
        # conform to the foreign endpoint's stale connection IP when the
        # host is reusing the port number to connect to an IP address
        # that is local to the FakeNet system.
        with self.ip_fwd_table_lock:
            if pkt.skey in self.ip_fwd_table:
                self.pdebug(DIPNAT, ' - DELETING ipfwd key entry: %s' %
                            (pkt.skey))
                del self.ip_fwd_table[pkt.skey]

Conditionally redirect foreign destination IPs to localhost.

On Linux, this is used only under SingleHost mode.

Args:
    crit: DivertParms object
    pkt: fnpacket.PacketCtx or derived object
    pid: int process ID associated with the packet
    comm: Process name (command) that sent the packet

Side-effects:
    May mangle the packet by modifying the destination IP to point to a
    loopback or external interface IP local to the system where
    FakeNet-NG is running.

Returns:
    None
maybe_redir_ip
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def maybe_fixup_srcip(self, crit, pkt, pid, comm):
    """Conditionally fix up the source IP address if the remote endpoint
    had their connection IP-forwarded.

    Check is based on whether the remote endpoint corresponds to a key in
    the IP forwarding table.

    Args:
        crit: DivertParms object
        pkt: fnpacket.PacketCtx or derived object
        pid: int process ID associated with the packet
        comm: Process name (command) that sent the packet

    Side-effects:
        May mangle the packet by modifying the source IP to reflect the
        original destination IP that was overwritten by maybe_redir_ip.

    Returns:
        None
    """
    # Condition 4: If the local endpoint (IP/port/proto) combo
    # corresponds to an endpoint that initiated a conversation with a
    # foreign endpoint in the past, then fix up the source IP for this
    # incoming packet with the last destination IP that was requested
    # by the endpoint.
    self.pdebug(DIPNAT, "Condition 4 test: was remote endpoint IP fwd'd?")
    with self.ip_fwd_table_lock:
        if self.single_host_mode and (pkt.dkey in self.ip_fwd_table):
            self.pdebug(DIPNAT, 'Condition 4 satisfied')
            self.pdebug(DIPNAT, ' = FOUND ipfwd key entry: ' + pkt.dkey)
            new_srcip = self.ip_fwd_table[pkt.dkey]
            self.pdebug(DIPNAT, 'MASQUERADING %s from IP %s' %
                        (pkt.hdrToStr(), new_srcip))
            pkt.src_ip = new_srcip
        else:
            self.pdebug(DIPNAT, ' ! NO SUCH ipfwd key entry: ' +
                        pkt.dkey)

Conditionally fix up the source IP address if the remote endpoint had
their connection IP-forwarded.

Check is based on whether the remote endpoint corresponds to a key in the
IP forwarding table.

Args:
    crit: DivertParms object
    pkt: fnpacket.PacketCtx or derived object
    pid: int process ID associated with the packet
    comm: Process name (command) that sent the packet

Side-effects:
    May mangle the packet by modifying the source IP to reflect the
    original destination IP that was overwritten by maybe_redir_ip.

Returns:
    None
maybe_fixup_srcip
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def maybe_redir_port(self, crit, pkt, pid, comm):
    """Conditionally send packets to the default listener for this proto.

    Args:
        crit: DivertParms object
        pkt: fnpacket.PacketCtx or derived object
        pid: int process ID associated with the packet
        comm: Process name (command) that sent the packet

    Side-effects:
        May mangle the packet by modifying the destination port to point
        to the default listener.

    Returns:
        None
    """
    # Pre-condition 1: there must be a default listener for this protocol
    default = self.default_listener.get(pkt.proto)
    if not default:
        return

    # Pre-condition 2: destination must not be present in the port
    # forwarding table (prevents masqueraded ports responding to unbound
    # ports from being mistaken as starting a conversation with an
    # unbound port).
    with self.port_fwd_table_lock:
        # Uses dkey to cross-reference
        if pkt.dkey in self.port_fwd_table:
            return

    # Proxy-related check: is the dport bound by a listener that is
    # hidden?
    dport_hidden_listener = crit.dport_hidden_listener

    # Condition 2: If the packet is destined for an unbound port, then
    # redirect it to a bound port and save the old destination port in
    # the port forwarding table keyed by the source endpoint identity.
    bound_ports = self.listener_ports.getPortList(pkt.proto)
    if dport_hidden_listener or self.decide_redir_port(pkt, bound_ports):
        self.pdebug(DDPFV, 'Condition 2 satisfied: Packet destined for '
                    'unbound port or hidden listener')

        # Post-condition 1: General ignore conditions are not met, or
        # this is part of a conversation that is already being ignored.
        #
        # Placed after the decision to redirect for three reasons:
        # 1.) We want to ensure that the else condition below has a
        #     chance to check whether to delete a stale port forwarding
        #     table entry.
        # 2.) Checking these conditions is, on average, more expensive
        #     than checking if the packet would be redirected in the
        #     first place.
        # 3.) Reporting of packets that are being ignored (i.e. not
        #     redirected), which is integrated into this check, should
        #     only appear when packets would otherwise have been
        #     redirected.

        # Is this conversation already being ignored for DPF purposes?
        with self.ignore_table_lock:
            if ((pkt.dkey in self.ignore_table) and
                    (self.ignore_table[pkt.dkey] == pkt.sport)):
                # This is a reply (e.g. a TCP RST) from the
                # non-port-forwarded server that the non-port-forwarded
                # client was trying to talk to. Leave it alone.
                return

        if self.check_should_ignore(pkt, pid, comm):
            with self.ignore_table_lock:
                self.ignore_table[pkt.skey] = pkt.dport
            return

        # Record the foreign endpoint and old destination port in the
        # port forwarding table
        self.pdebug(DDPFV, ' + ADDING portfwd key entry: ' + pkt.skey)
        with self.port_fwd_table_lock:
            self.port_fwd_table[pkt.skey] = pkt.dport

        self.pdebug(DDPF, 'Redirecting %s to go to port %d' %
                    (pkt.hdrToStr(), default))
        pkt.dport = default

    else:
        # Delete any stale entries in the port forwarding table: If the
        # foreign endpoint appears to be reusing a client port that was
        # formerly used to connect to an unbound port on this server,
        # remove the entry. This prevents the OUTPUT or other packet
        # hook from faithfully overwriting the source port to conform
        # to the foreign endpoint's stale connection port when the
        # foreign host is reusing the port number to connect to an
        # already-bound port on the FakeNet system.
        self.delete_stale_port_fwd_key(pkt.skey)

    if crit.first_packet_new_session:
        self.addSession(pkt)

        # Execute command if applicable
        self.maybeExecuteCmd(pkt, pid, comm)

Conditionally send packets to the default listener for this proto.

Args:
    crit: DivertParms object
    pkt: fnpacket.PacketCtx or derived object
    pid: int process ID associated with the packet
    comm: Process name (command) that sent the packet

Side-effects:
    May mangle the packet by modifying the destination port to point to
    the default listener.

Returns:
    None
maybe_redir_port
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def maybe_fixup_sport(self, crit, pkt, pid, comm):
    """Conditionally fix up source port if the remote endpoint had their
    connection port-forwarded to the default listener.

    Check is based on whether the remote endpoint corresponds to a key in
    the port forwarding table.

    Side-effects:
        May mangle the packet by modifying the source port to masquerade
        traffic coming from the default listener to look as if it is
        coming from the port that the client originally requested.

    Returns:
        The mangled packet header if the packet was modified
        None, otherwise
    """
    hdr_modified = None

    # Condition 3: If the remote endpoint (IP/port/proto) combo
    # corresponds to an endpoint that initiated a conversation with an
    # unbound port in the past, then fix up the source port for this
    # outgoing packet with the last destination port that was requested
    # by that endpoint. The term "endpoint" is (ab)used loosely here to
    # apply to UDP host/port/proto combos and any other protocol that
    # may be supported in the future.
    new_sport = None
    self.pdebug(DDPFV, "Condition 3 test: was remote endpoint port "
                "fwd'd?")

    with self.port_fwd_table_lock:
        new_sport = self.port_fwd_table.get(pkt.dkey)

    if new_sport:
        self.pdebug(DDPFV, 'Condition 3 satisfied: must fix up '
                    'source port')
        self.pdebug(DDPFV, ' = FOUND portfwd key entry: ' + pkt.dkey)
        self.pdebug(DDPF, 'MASQUERADING %s to come from port %d' %
                    (pkt.hdrToStr(), new_sport))
        pkt.sport = new_sport
    else:
        self.pdebug(DDPFV, ' ! NO SUCH portfwd key entry: ' + pkt.dkey)

    return pkt.hdr if pkt.mangled else None

Conditionally fix up source port if the remote endpoint had their
connection port-forwarded to the default listener.

Check is based on whether the remote endpoint corresponds to a key in the
port forwarding table.

Side-effects:
    May mangle the packet by modifying the source port to masquerade
    traffic coming from the default listener to look as if it is coming
    from the port that the client originally requested.

Returns:
    The mangled packet header if the packet was modified
    None, otherwise
maybe_fixup_sport
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
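Taken together, maybe_redir_port and maybe_fixup_sport implement a NAT-like round trip over the shared port forwarding table; a standalone dict model of that lifecycle (the endpoint key string and port numbers are hypothetical):

port_fwd_table = {}
default_listener = 1337

# Outbound: client 10.0.0.9:49152 asks for unbound port 4444
skey = 'TCP/10.0.0.9:49152'
port_fwd_table[skey] = 4444   # remember what the client wanted
dport = default_listener      # ...and divert to the default listener

# Inbound reply: masquerade the listener's source port back to 4444
new_sport = port_fwd_table.get(skey)
print(new_sport)  # 4444, so the client sees the port it originally dialed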
def decide_redir_port(self, pkt, bound_ports):
    """Decide whether to redirect a port.

    Optimized logic derived by truth table + k-map. See docs/internals.md
    for details.

    Args:
        pkt: fnpacket.PacketCtx or derived object
        bound_ports: Set of ports that are bound for this protocol

    Returns:
        True if the packet must be redirected to the default listener
        False otherwise
    """
    # A, B, C, D for easy manipulation; full names for readability only.
    a = src_local = (pkt.src_ip in self.ip_addrs[pkt.ipver])
    c = sport_bound = pkt.sport in bound_ports
    d = dport_bound = pkt.dport in bound_ports

    if self.pdebug_level & DDPFV:
        # Unused logic term not calculated except for debug output
        b = dst_local = (pkt.dst_ip in self.ip_addrs[pkt.ipver])

        self.pdebug(DDPFV, 'src %s (%s)' %
                    (str(pkt.src_ip), ['foreign', 'local'][a]))
        self.pdebug(DDPFV, 'dst %s (%s)' %
                    (str(pkt.dst_ip), ['foreign', 'local'][b]))
        self.pdebug(DDPFV, 'sport %s (%sbound)' %
                    (str(pkt.sport), ['un', ''][c]))
        self.pdebug(DDPFV, 'dport %s (%sbound)' %
                    (str(pkt.dport), ['un', ''][d]))

        # Convenience function: binary representation of a bool
        def bn(x):
            return '1' if x else '0'  # Bool -> binary

        self.pdebug(DDPFV, 'abcd = ' + bn(a) + bn(b) + bn(c) + bn(d))

    return (not a and not d) or (not c and not d)

Decide whether to redirect a port.

Optimized logic derived by truth table + k-map. See docs/internals.md for
details.

Args:
    pkt: fnpacket.PacketCtx or derived object
    bound_ports: Set of ports that are bound for this protocol

Returns:
    True if the packet must be redirected to the default listener
    False otherwise
decide_redir_port
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
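The reduced expression can be checked exhaustively against its three inputs; a standalone enumeration (variable names mirror the function above):

from itertools import product

def redir(src_local, sport_bound, dport_bound):
    a, c, d = src_local, sport_bound, dport_bound
    return (not a and not d) or (not c and not d)

for a, c, d in product((False, True), repeat=3):
    print(f'src_local={a!s:5} sport_bound={c!s:5} '
          f'dport_bound={d!s:5} -> redirect={redir(a, c, d)}')

# Reading the table: any packet to an unbound destination port is
# redirected unless it came from a local source on a bound source port,
# i.e. unless it is a listener's own outbound traffic.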
def addSession(self, pkt):
    """Add a connection to the sessions hash table.

    Args:
        pkt: fnpacket.PacketCtx or derived object

    Returns:
        None
    """
    session = namedtuple('session', ['dst_ip', 'dport', 'pid',
                                     'comm', 'dport0', 'proto'])
    pid, comm = self.get_pid_comm(pkt)
    self.sessions[pkt.sport] = session(pkt.dst_ip, pkt.dport, pid,
                                       comm, pkt._dport0, pkt.proto)

Add a connection to the sessions hash table.

Args:
    pkt: fnpacket.PacketCtx or derived object

Returns:
    None
addSession
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
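A standalone sketch of the sessions cache (hypothetical endpoint values), showing how first_packet_new_session's comparison works against the stored namedtuple:

from collections import namedtuple

session = namedtuple('session', ['dst_ip', 'dport', 'pid', 'comm',
                                 'dport0', 'proto'])
sessions = {}

# Record a conversation keyed by its source port
sessions[49152] = session('10.0.0.5', 80, 321, 'evil.exe', 80, 'TCP')

# Same endpoints again: not a new session
prev = sessions.get(49152)
print((prev.dst_ip, prev.dport) == ('10.0.0.5', 80))  # True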
def maybeExecuteCmd(self, pkt, pid, comm):
    """Execute any ExecuteCmd associated with this port/listener.

    Args:
        pkt: fnpacket.PacketCtx or derived object
        pid: int process ID associated with the packet
        comm: Process name (command) that sent the packet

    Returns:
        None
    """
    if not pid:
        return

    execCmd = self.build_cmd(pkt, pid, comm)
    if execCmd:
        self.logger.info('Executing command: %s' % (execCmd))
        self.execute_detached(execCmd)

Execute any ExecuteCmd associated with this port/listener.

Args:
    pkt: fnpacket.PacketCtx or derived object
    pid: int process ID associated with the packet
    comm: Process name (command) that sent the packet

Returns:
    None
maybeExecuteCmd
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def mapProxySportToOrigSport(self, proto, orig_sport, proxy_sport,
                             is_ssl_encrypted):
    """Maps Proxy initiated source ports to their original source ports.

    The Proxy listener uses this method to notify the diverter about the
    proxy originated source port for the original source port. It also
    notifies if the packet uses SSL encryption.

    Args:
        proto: str protocol of socket created by ProxyListener
        orig_sport: int source port that originated the packet
        proxy_sport: int source port initiated by Proxy listener
        is_ssl_encrypted: bool is the packet SSL encrypted or not

    Returns:
        None
    """
    self.proxy_sport_to_orig_sport_map[(proto, proxy_sport)] = orig_sport
    self.is_proxied_pkt_ssl_encrypted[(proto, proxy_sport)] = \
        is_ssl_encrypted

Maps Proxy initiated source ports to their original source ports.

The Proxy listener uses this method to notify the diverter about the
proxy originated source port for the original source port. It also
notifies if the packet uses SSL encryption.

Args:
    proto: str protocol of socket created by ProxyListener
    orig_sport: int source port that originated the packet
    proxy_sport: int source port initiated by Proxy listener
    is_ssl_encrypted: bool is the packet SSL encrypted or not

Returns:
    None
mapProxySportToOrigSport
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def logNbi(self, sport, nbi, proto, application_layer_proto,
           is_ssl_encrypted):
    """Collects the NBIs from all listeners into a dictionary.

    All listeners use this method to notify the diverter about any NBI
    captured within their scope.

    Args:
        sport: int port bound by listener
        nbi: dict NBI captured within the listener
        proto: str protocol used by the listener
        application_layer_proto: str Application layer protocol of the pkt
        is_ssl_encrypted: str is the listener configured to use SSL or not

    Returns:
        None
    """
    proxied_nbi = (proto, sport) in self.proxy_sport_to_orig_sport_map

    # For proxied nbis, override the listener's is_ssl_encrypted with the
    # Proxy listener's is_ssl_encrypted, and update the original sport.
    # For non-proxied nbis, use listener provided is_ssl_encrypted and
    # sport.
    if proxied_nbi:
        orig_sport = self.proxy_sport_to_orig_sport_map[(proto, sport)]
        is_ssl_encrypted = self.is_proxied_pkt_ssl_encrypted.get(
            (proto, sport))
    else:
        orig_sport = sport

    if self.sessions.get(orig_sport) is None:
        return

    dst_ip, _, pid, comm, orig_dport, transport_layer_proto = \
        self.sessions.get(orig_sport)

    if application_layer_proto == '':
        application_layer_proto = transport_layer_proto

    # Normalize pid and comm for MultiHost mode
    if (pid is None and comm is None and
            self.network_mode.lower() == 'multihost'):
        self.remote_pid_counter += 1
        pid = self.remote_pid_counter
        comm = 'Remote Process'

    # Craft the dictionary
    nbi_entry = {
        'transport_layer_proto': transport_layer_proto,
        'sport': orig_sport,
        'dst_ip': dst_ip,
        'dport': orig_dport,
        'is_ssl_encrypted': is_ssl_encrypted,
        'network_mode': self.network_mode.lower(),
        'nbi': nbi
    }
    application_layer_proto = application_layer_proto.lower()

    # If it's a new NBI from an existing process or existing protocol,
    # append the nbi, else create a new key
    self.nbis.setdefault((pid, comm), {}).setdefault(
        application_layer_proto, []).append(nbi_entry)

Collects the NBIs from all listeners into a dictionary.

All listeners use this method to notify the diverter about any NBI
captured within their scope.

Args:
    sport: int port bound by listener
    nbi: dict NBI captured within the listener
    proto: str protocol used by the listener
    application_layer_proto: str Application layer protocol of the pkt
    is_ssl_encrypted: str is the listener configured to use SSL or not

Returns:
    None
logNbi
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
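A standalone sketch of the nested dictionary shape that logNbi() builds with the chained setdefault() calls; the pid, process name, protocol, and NBI payload below are hypothetical:

nbis = {}
pid, comm = 1234, 'malware.exe'
nbi_entry = {
    'transport_layer_proto': 'TCP',
    'sport': 49152,
    'dst_ip': '192.0.2.10',
    'dport': 80,
    'is_ssl_encrypted': False,
    'network_mode': 'singlehost',
    'nbi': {'user-agent': 'Mozilla/5.0'},
}

# First NBI for this (pid, comm) creates both nesting levels; later NBIs
# for the same process and protocol simply append to the existing list.
nbis.setdefault((pid, comm), {}).setdefault('http', []).append(nbi_entry)
# Result: {(1234, 'malware.exe'): {'http': [nbi_entry]}}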
def prettyPrintNbi(self):
    """Convenience method to print all NBIs in an appropriate format upon
    FakeNet session termination. Called by the stop() method of the
    diverter.
    """
    banner = r"""
 NNNNNNNN        NNNNNNNNBBBBBBBBBBBBBBBBB   IIIIIIIIII
 N:::::::N       N::::::NB::::::::::::::::B  I::::::::I
 N::::::::N      N::::::NB::::::BBBBBB:::::B I::::::::I
 N:::::::::N     N::::::NBB:::::B     B:::::BII::::::II
 N::::::::::N    N::::::N  B::::B     B:::::B  I::::I      ssssssssss
 N:::::::::::N   N::::::N  B::::B     B:::::B  I::::I    ss::::::::::s
 N:::::::N::::N  N::::::N  B::::BBBBBB:::::B   I::::I  ss:::::::::::::s
 N::::::N N::::N N::::::N  B:::::::::::::BB    I::::I  s::::::ssss:::::s
 N::::::N  N::::N:::::::N  B::::BBBBBB:::::B   I::::I   s:::::s  ssssss
 N::::::N   N:::::::::::N  B::::B     B:::::B  I::::I     s::::::s
 N::::::N    N::::::::::N  B::::B     B:::::B  I::::I        s::::::s
 N::::::N     N:::::::::N  B::::B     B:::::B  I::::I  ssssss   s:::::s
 N::::::N      N::::::::NBB:::::BBBBBB::::::BII::::::IIs:::::ssss::::::s
 N::::::N       N:::::::NB:::::::::::::::::B I::::::::Is::::::::::::::s
 N::::::N        N::::::NB::::::::::::::::B  I::::::::I s:::::::::::ss
 NNNNNNNN         NNNNNNNBBBBBBBBBBBBBBBBB   IIIIIIIIII  sssssssssss

    ========================================================================
                      Network-Based Indicators Summary
    ========================================================================
    """
    indent = "  "
    self.logger.info(banner)
    process_counter = 0
    for process_info, values in self.nbis.items():
        process_counter += 1
        self.logger.info(f"[{process_counter}] Process ID: "
                         f"{process_info[0]}, Process Name: {process_info[1]}")

        for application_layer_proto, nbi_entry in values.items():
            self.logger.info(f"{indent*2} Protocol: "
                             f"{application_layer_proto}")
            nbi_counter = 0

            for attributes in nbi_entry:
                nbi_counter += 1
                self.logger.info(f"{indent*3}{nbi_counter}. Transport Layer "
                                 f"Protocol: {attributes['transport_layer_proto']}")
                self.logger.info(f"{indent*4}Source port: {attributes['sport']}")
                self.logger.info(f"{indent*4}Destination IP: {attributes['dst_ip']}")
                self.logger.info(f"{indent*4}Destination port: {attributes['dport']}")
                self.logger.info(f"{indent*4}SSL encrypted: "
                                 f"{attributes['is_ssl_encrypted']}")
                self.logger.info(f"{indent*4}Network mode: "
                                 f"{attributes['network_mode']}")

                for key, v in attributes['nbi'].items():
                    if v is not None:
                        # Convert the NBI value to str if it is not already
                        if isinstance(v, bytes):
                            v = v.decode('utf-8')
                        # Print at most 40 characters of each NBI value
                        v = (v[:40] + "...") if len(v) > 40 else v
                    self.logger.info(f"{indent*6}-{key}: {v}")

                self.logger.info("\r")
            self.logger.info("\r")
Convenience method to print all NBIs in an appropriate format upon FakeNet session termination. Called by the stop() method of the diverter.
prettyPrintNbi
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
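The value normalization performed while printing can be isolated into a small standalone sketch; the helper name below is hypothetical, but the decode-then-truncate idiom matches the loop above:

def _display_value(v, limit=40):
    # Hypothetical helper mirroring the printing idiom in prettyPrintNbi():
    # decode bytes, then truncate anything longer than `limit` characters.
    if isinstance(v, bytes):
        v = v.decode('utf-8')
    return (v[:limit] + '...') if len(v) > limit else v

print(_display_value(b'GET /a/very/long/request/path/that/keeps/going HTTP/1.1'))
# GET /a/very/long/request/path/that/keeps...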
def generate_html_report(self):
    """Generates an interactive HTML report containing the NBI summary,
    saved to the main working directory of flare-fakenet-ng. Called by
    the stop() method of the diverter.
    """
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        # Inside a PyInstaller bundle
        fakenet_dir_path = os.path.dirname(sys.executable)
    else:
        fakenet_dir_path = os.fspath(Path(__file__).parents[1])

    template_file = os.path.join(fakenet_dir_path, "configs",
                                 "html_report_template.html")

    template_loader = jinja2.FileSystemLoader(searchpath=os.path.dirname(template_file))
    template_env = jinja2.Environment(loader=template_loader)
    template = template_env.get_template(os.path.basename(template_file))

    timestamp = time.strftime('%Y%m%d_%H%M%S')
    output_filename = f"report_{timestamp}.html"
    with open(output_filename, "w") as output_file:
        output_file.write(template.render(nbis=self.nbis))

    self.logger.info(f"Generated new HTML report: {output_filename}")
Generates an interactive HTML report containing the NBI summary, saved to the main working directory of flare-fakenet-ng. Called by the stop() method of the diverter.
generate_html_report
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
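A self-contained jinja2 sketch of the same render flow, assuming a template exists at the hypothetical relative path below and using a hypothetical nbis payload:

import os
import time
import jinja2

template_file = os.path.join('configs', 'html_report_template.html')
template_env = jinja2.Environment(
    loader=jinja2.FileSystemLoader(searchpath=os.path.dirname(template_file)))
template = template_env.get_template(os.path.basename(template_file))

nbis = {(1234, 'malware.exe'): {'http': []}}  # hypothetical payload
output_filename = 'report_%s.html' % time.strftime('%Y%m%d_%H%M%S')
with open(output_filename, 'w') as output_file:
    output_file.write(template.render(nbis=nbis))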
def isProcessBlackListed(self, proto, sport=None, process_name=None, dport=None): """Checks if a process is blacklisted. Expected arguments are either: - process_name and dport, or - sport """ pid = None if self.single_host_mode and proto is not None: if process_name is None or dport is None: if sport is None: return False, process_name, pid orig_sport = self.proxy_sport_to_orig_sport_map.get((proto, sport), sport) session = self.sessions.get(orig_sport) if session: pid = session.pid process_name = session.comm dport = session.dport0 else: return False, process_name, pid # Check process blacklist if process_name in self.blacklist_processes: self.pdebug(DIGN, ('Ignoring %s packet from process %s ' + 'in the process blacklist.') % (proto, process_name)) return True, process_name, pid # Check per-listener blacklisted process list if self.listener_ports.isProcessBlackListHit( proto, dport, process_name): self.pdebug(DIGN, ('Ignoring %s request packet from ' + 'process %s in the listener process ' + 'blacklist.') % (proto, process_name)) return True, process_name, pid return False, process_name, pid
Checks if a process is blacklisted.

Expected arguments are either:
    - process_name and dport, or
    - sport
isProcessBlackListed
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
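Hypothetical call sites illustrating the two documented argument shapes; the diverter instance, process names, and port numbers are illustrative only:

# 1) Process name and destination port already resolved from the packet:
blocked, name, pid = diverter.isProcessBlackListed(
    'TCP', process_name='python.exe', dport=80)

# 2) Only the source port is known; the method consults the proxy port
#    map and the session table to recover process name, pid, and dport:
blocked, name, pid = diverter.isProcessBlackListed('TCP', sport=49152)

if blocked:
    print('Ignoring packet from %s (pid %s)' % (name, pid))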
def logNbi(self, sport, nbi, proto, application_layer_proto, is_ssl_encrypted): """Delegate the logging of NBIs to the diverter. This method forwards the provided NBI information to the logNbi() method in the underlying diverter object. Called by all listeners to log NBIs. """ self.__diverter.logNbi(sport, nbi, proto, application_layer_proto, is_ssl_encrypted)
Delegate the logging of NBIs to the diverter. This method forwards the provided NBI information to the logNbi() method in the underlying diverter object. Called by all listeners to log NBIs.
logNbi
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def mapProxySportToOrigSport(self, proto, orig_sport, proxy_sport, is_ssl_encrypted): """Delegate the mapping of proxy sport to original sport to the diverter. This method forwards the provided parameters to the mapProxySportToOrigSport() method in the underlying diverter object. Called by ProxyListener to report the mapping between proxy initiated source port and original source port. """ self.__diverter.mapProxySportToOrigSport(proto, orig_sport, proxy_sport, is_ssl_encrypted)
Delegate the mapping of proxy sport to original sport to the diverter. This method forwards the provided parameters to the mapProxySportToOrigSport() method in the underlying diverter object. Called by ProxyListener to report the mapping between proxy initiated source port and original source port.
mapProxySportToOrigSport
python
mandiant/flare-fakenet-ng
fakenet/diverters/diverterbase.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/diverterbase.py
Apache-2.0
def configure(self, config_dict, portlists=[], stringlists=[], idlists=[]): """Parse configuration. Does three things: 1.) Turn dictionary keys to lowercase 2.) Turn string lists into arrays for quicker access 3.) Expand port range specifications """ self._dict = dict((k.lower(), v) for k, v in config_dict.items()) for entry in portlists: portlist = self.getconfigval(entry) if portlist: expanded = self._expand_ports(portlist) self.setconfigval(entry, expanded) for entry in stringlists: stringlist = self.getconfigval(entry) if stringlist: expanded = [s.strip() for s in stringlist.split(',')] self.setconfigval(entry, expanded) for entry in idlists: idlist = self.getconfigval(entry) if idlist: expanded = [int(c) for c in idlist.split(',')] self.setconfigval(entry, expanded)
Parse configuration.

Does three things:
    1.) Turn dictionary keys to lowercase
    2.) Turn string lists into arrays for quicker access
    3.) Expand port range specifications
configure
python
mandiant/flare-fakenet-ng
fakenet/diverters/fnconfig.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/fnconfig.py
Apache-2.0
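_expand_ports() is referenced above but not shown here; a plausible standalone sketch of the expansion it implies, turning a spec such as '80,443,8000-8003' into a flat list of integers:

def expand_ports(port_list):
    # Plausible sketch; the real _expand_ports() may differ in details
    ports = []
    for item in port_list.split(','):
        if '-' not in item:
            ports.append(int(item))
        else:
            start, stop = item.split('-', 1)
            ports.extend(range(int(start), int(stop) + 1))
    return ports

print(expand_ports('80,443,8000-8003'))
# [80, 443, 8000, 8001, 8002, 8003]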
def _parseIp(self): """Parse IP src/dst fields and next-layer fields if recognized.""" if self._is_ip: self._src_ip0 = self._src_ip = socket.inet_ntoa(self._hdr.src) self._dst_ip0 = self._dst_ip = socket.inet_ntoa(self._hdr.dst) self.proto = self.handled_protocols.get(self.proto_num) # If this is a transport protocol we handle... if self.proto: self._tcpudpcsum0 = self._hdr.data.sum self._sport0 = self._sport = self._hdr.data.sport self._dport0 = self._dport = self._hdr.data.dport self.skey = self._genEndpointKey(self._src_ip, self._sport) self.dkey = self._genEndpointKey(self._dst_ip, self._dport)
Parse IP src/dst fields and next-layer fields if recognized.
_parseIp
python
mandiant/flare-fakenet-ng
fakenet/diverters/fnpacket.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/fnpacket.py
Apache-2.0
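A standalone dpkt sketch of the parsing performed above: unpack an IPv4 header, render the packed addresses with inet_ntoa, and read the ports from the recognized transport layer. The raw packet is built with dpkt itself so the example is self-contained; addresses and ports are hypothetical:

import socket
import dpkt

# Serialize a hypothetical TCP/IP packet (dpkt fills in lengths/checksums)
raw = bytes(dpkt.ip.IP(p=dpkt.ip.IP_PROTO_TCP,
                       src=socket.inet_aton('192.0.2.1'),
                       dst=socket.inet_aton('192.0.2.2'),
                       data=dpkt.tcp.TCP(sport=49152, dport=80)))

# Parse it back: since p == IP_PROTO_TCP, hdr.data is a dpkt.tcp.TCP
hdr = dpkt.ip.IP(raw)
src_ip = socket.inet_ntoa(hdr.src)
dst_ip = socket.inet_ntoa(hdr.dst)
print('%s:%d -> %s:%d' % (src_ip, hdr.data.sport, dst_ip, hdr.data.dport))
# 192.0.2.1:49152 -> 192.0.2.2:80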
def _calcCsums(self): """The roundabout dance of inducing dpkt to recalculate checksums...""" self._hdr.sum = 0 self._hdr.data.sum = 0 # This has the side-effect of invoking dpkt.in_cksum() et al: str(self._hdr)
The roundabout dance of inducing dpkt to recalculate checksums...
_calcCsums
python
mandiant/flare-fakenet-ng
fakenet/diverters/fnpacket.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/fnpacket.py
Apache-2.0
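The same trick in a standalone dpkt sketch: dpkt only recomputes checksums during serialization when the stored sums are zero, so both sums are zeroed before serializing. bytes() is used here as the Python 3 equivalent of the str() call in the method; the packet itself is hypothetical:

import socket
import dpkt

hdr = dpkt.ip.IP(p=dpkt.ip.IP_PROTO_TCP,
                 src=socket.inet_aton('192.0.2.1'),
                 dst=socket.inet_aton('192.0.2.2'),
                 data=dpkt.tcp.TCP(sport=49152, dport=80))

hdr.sum = 0
hdr.data.sum = 0
bytes(hdr)  # Side effect: dpkt fills in hdr.sum and hdr.data.sum
print(hex(hdr.sum), hex(hdr.data.sum))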
def _iptables_format(self, chain, iface, argfmt): """Format iptables command line with optional interface restriction. Parameters ---------- chain : string One of 'OUTPUT', 'POSTROUTING', 'INPUT', or 'PREROUTING', used for deciding the correct flag (-i versus -o) iface : string or NoneType Name of interface to restrict the rule to (e.g. 'eth0'), or None argfmt : string Format string for remaining iptables arguments. This format string will not be included in format string evaluation but is appended as-is to the iptables command. """ flag_iface = '' if iface: if chain in ['OUTPUT', 'POSTROUTING']: flag_iface = '-o' elif chain in ['INPUT', 'PREROUTING']: flag_iface = '-i' else: raise NotImplementedError('Unanticipated chain %s' % (chain)) self._addcmd = 'iptables -I {chain} {flag_if} {iface} {fmt}' self._addcmd = self._addcmd.format(chain=chain, flag_if=flag_iface, iface=(iface or ''), fmt=argfmt) self._remcmd = 'iptables -D {chain} {flag_if} {iface} {fmt}' self._remcmd = self._remcmd.format(chain=chain, flag_if=flag_iface, iface=(iface or ''), fmt=argfmt)
Format iptables command line with optional interface restriction.

Parameters
----------
chain : string
    One of 'OUTPUT', 'POSTROUTING', 'INPUT', or 'PREROUTING', used for
    deciding the correct flag (-i versus -o)
iface : string or NoneType
    Name of interface to restrict the rule to (e.g. 'eth0'), or None
argfmt : string
    Format string for remaining iptables arguments. This format string
    will not be included in format string evaluation but is appended
    as-is to the iptables command.
_iptables_format
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0
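A standalone sketch of the strings this format yields for a hypothetical NFQUEUE rule on eth0; the flag choice tracks the chain exactly as above:

chain, iface, argfmt = 'OUTPUT', 'eth0', '-j NFQUEUE --queue-num 1'
flag_iface = '-o' if chain in ['OUTPUT', 'POSTROUTING'] else '-i'

addcmd = 'iptables -I {chain} {flag_if} {iface} {fmt}'.format(
    chain=chain, flag_if=flag_iface, iface=iface, fmt=argfmt)
remcmd = 'iptables -D {chain} {flag_if} {iface} {fmt}'.format(
    chain=chain, flag_if=flag_iface, iface=iface, fmt=argfmt)

print(addcmd)  # iptables -I OUTPUT -o eth0 -j NFQUEUE --queue-num 1
print(remcmd)  # iptables -D OUTPUT -o eth0 -j NFQUEUE --queue-num 1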
def start(self, timeout_sec=0.5): """Binds to the netfilter queue number specified in the ctor, obtains the netlink socket, sets a timeout of <timeout_sec>, and starts the thread procedure which checks _stopflag every time the netlink socket times out. """ # Execute iptables to add the rule ret = self._rule.add() if ret != 0: return False self._rule_added = True # Bind the specified callback to the specified queue try: self._nfqueue.bind(self.qno, self._callback) self._bound = True except OSError as e: self.logger.error('Failed to start queue for %s: %s' % (str(self), str(e))) except RuntimeWarning as e: self.logger.error('Failed to start queue for %s: %s' % (str(self), str(e))) if not self._bound: return False # Facilitate _stopflag monitoring and thread joining self._sk = socket.fromfd( self._nfqueue.get_fd(), socket.AF_UNIX, socket.SOCK_STREAM) self._sk.settimeout(timeout_sec) # Start a thread to run the queue and monitor the stop flag self._thread = threading.Thread(target=self._threadproc) self._thread.daemon = True self._stopflag = False try: self._thread.start() self._started = True except RuntimeError as e: self.logger.error('Failed to start queue thread: %s' % (str(e))) return self._started
Binds to the netfilter queue number specified in the ctor, obtains the netlink socket, sets a timeout of <timeout_sec>, and starts the thread procedure which checks _stopflag every time the netlink socket times out.
start
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0
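_threadproc() is referenced above but not shown here; a plausible sketch of the loop it implies, built on netfilterqueue's run_socket() so that the settimeout() call above lets the thread wake up periodically and re-check the stop flag:

import socket

def _threadproc(self):
    # Plausible sketch; the actual thread procedure may differ
    while not self._stopflag:
        try:
            self._nfqueue.run_socket(self._sk)
        except socket.timeout:
            # Netlink socket timed out: loop and re-check self._stopflag
            pass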
def parse(self, multi=False, max_col=None):
    """Rip through the file and call cb to extract field(s).

    Specify multi if you want to collect an array instead of exiting
    the first time the callback returns anything.

    Only specify max_col if you are not certain that the maximum column
    number you will access exists. For procfs files, this should remain
    None.
    """
    retval = list() if multi else None

    try:
        with open(self.path, 'r') as f:
            n = 0  # Track the line number for error reporting

            while True:
                line = f.readline()
                n += 1

                # EOF case
                if not len(line):
                    break

                # Skip header lines
                if self.skip:
                    self.skip -= 1
                    continue

                fields = line.split()

                # Insufficient columns => ValueError
                if max_col and (len(fields) < max_col):
                    raise ValueError(('Line %d in %s has less than %d '
                                      'columns') % (n, self.path, max_col))

                cb_retval = self.cb(fields)

                if cb_retval:
                    if multi:
                        retval.append(cb_retval)
                    else:
                        retval = cb_retval
                        break

    except IOError as e:
        self.logger.error('Failed accessing %s: %s' % (self.path, str(e)))
        # All or nothing
        retval = [] if multi else None

    return retval
Rip through the file and call cb to extract field(s).

Specify multi if you want to collect an array instead of exiting the first time the callback returns anything.

Only specify max_col if you are not certain that the maximum column number you will access exists. For procfs files, this should remain None.
parse
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0
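A hypothetical usage of this parser against /proc/net/tcp, assuming the containing class is constructed as ProcfsReader(path, skip, cb) to match the attributes used in parse(); the class name and constructor shape are assumptions:

# Skip the one-line header and collect the local-address field of
# every row; the returned strings look like '0100007F:0277'.
reader = ProcfsReader('/proc/net/tcp', 1, lambda fields: fields[1])
local_addrs = reader.parse(multi=True)
print(local_addrs)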
def linux_get_current_nfnlq_bindings(self): """Determine what NFQUEUE queue numbers (if any) are already bound by existing libnfqueue client processes. Although iptables rules may exist specifying other queues in addition to these, the netfilter team does not support using libiptc (such as via python-iptables) to detect that condition, so code that does so may break in the future. Shelling out to iptables and parsing its output for NFQUEUE numbers is not an attractive option. The practice of checking the currently bound NetFilter netlink queue bindings is a compromise. Note that if an iptables rule specifies an NFQUEUE number that is not yet bound by any process in the system, the results are undefined. We can add FakeNet arguments to be passed to the Diverter for giving the user more control if it becomes necessary. """ procfs_path = '/proc/net/netfilter/nfnetlink_queue' qnos = list() try: with open(procfs_path, 'r') as f: lines = f.read().split('\n') for line in lines: line = line.strip() if line: queue_nr = int(line.split()[0], 10) self.pdebug(DNFQUEUE, ('Found NFQUEUE #' + str(queue_nr) + ' per ') + procfs_path) qnos.append(queue_nr) except IOError as e: self.logger.debug(('Failed to open %s to enumerate netfilter ' 'netlink queues, caller may proceed as if ' 'none are in use: %s') % (procfs_path, str(e))) return qnos
Determine what NFQUEUE queue numbers (if any) are already bound by existing libnfqueue client processes.

Although iptables rules may exist specifying other queues in addition to these, the netfilter team does not support using libiptc (such as via python-iptables) to detect that condition, so code that does so may break in the future. Shelling out to iptables and parsing its output for NFQUEUE numbers is not an attractive option. The practice of checking the currently bound NetFilter netlink queue bindings is a compromise.

Note that if an iptables rule specifies an NFQUEUE number that is not yet bound by any process in the system, the results are undefined. We can add FakeNet arguments to be passed to the Diverter for giving the user more control if it becomes necessary.
linux_get_current_nfnlq_bindings
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0
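One plausible way to consume the returned list: choose queue numbers that no other libnetfilter_queue client has bound yet. The helper name and starting list below are hypothetical:

def first_free_qnos(qnos, count):
    # Hypothetical helper: pick the lowest queue numbers not in use
    free = []
    candidate = 0
    while len(free) < count:
        if candidate not in qnos:
            free.append(candidate)
        candidate += 1
    return free

print(first_free_qnos([0, 1, 3], 2))  # [2, 4]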
def linux_iptables_redir_iface(self, iface): """Linux-specific iptables processing for interface-based redirect rules. returns: tuple(bool, list(IptCmdTemplate)) Status of the operation and any successful iptables rules that will need to be undone. """ iptables_rules = [] rule = IptCmdTemplateRedir(iface) ret = rule.add() if ret != 0: self.logger.error('Failed to create PREROUTING/REDIRECT ' + 'rule for %s, stopping...' % (iface)) return (False, iptables_rules) iptables_rules.append(rule) return (True, iptables_rules)
Linux-specific iptables processing for interface-based redirect rules.

returns:
    tuple(bool, list(IptCmdTemplate))
    Status of the operation and any successful iptables rules that will
    need to be undone.
linux_iptables_redir_iface
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0
def linux_remove_iptables_rules(self, rules): """Execute the iptables command to remove each rule that was successfully added. """ failed = [] for rule in rules: ret = rule.remove() if ret != 0: failed.append(rule) return failed
Execute the iptables command to remove each rule that was successfully added.
linux_remove_iptables_rules
python
mandiant/flare-fakenet-ng
fakenet/diverters/linutil.py
https://github.com/mandiant/flare-fakenet-ng/blob/master/fakenet/diverters/linutil.py
Apache-2.0