python – boto3 put_item succeeds but record does not appear

I haven’t been able to find an answer for this anywhere, hoping SO might be able to finally help.

I’ve got a Lambda function that processes a record and then writes it to a DynamoDB table. For all intents and purposes, the put_item call appears to succeed; however, when I check the DynamoDB table, the record never appears in it.

import json
import boto3
import uuid
import urllib3
from botocore.exceptions import ClientError

def lambda_handler(event, context):
  apiResponse = {}
  for record in event['Records']:
    decoded = json.loads(record['body'])
    listId = int(decoded['queryParams']['rec_id'])
    apiCall = "INTERNAL API"
    http = urllib3.PoolManager()
    request = http.request('GET', apiCall)
    apiResponse = json.loads(request.data.decode('utf-8'))

    try:
      client = boto3.resource('dynamodb')
      table = client.Table('HistoryAuditTable')
      saveStatus = table.put_item(Item={
        'UUID': uuid.uuid4().hex,
        'RecId': listId,
        'MessageType': decoded['queryParams']['type'],
        'MessageTimestampUTC': record['attributes']['SentTimestamp'],
        'Message': apiResponse
      })
      print(saveStatus) # This prints out a 200 status code in CloudWatch
    except ClientError as e:
      # This error never happens.
      print(e)

  # Response Status
  response = {}
  response["body"] = json.dumps(decoded)

  return response

My apiResponse is a simple JSON payload containing some audit data that we keep track of whenever a record changes.

My dynamo table has the following fields:

UUID: self-explanatory
RecId: an internal record identifier (we keep this separate from the PK because the same record might be updated again and again)
MessageType: string representing whether the record was a "SEED_VALUE", "CREATE", "UPDATE", "ARCHIVE", or "SOFT_DELETE"
MessageTimestampUTC: self-explanatory
Message: JSON blob containing the record details
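For clarity, here is a made-up example of a single item as it should land in the table (the values are placeholders, not real data):

```python
import uuid

# Hypothetical example of one HistoryAuditTable item; every value below
# is a stand-in for illustration only.
item = {
    'UUID': uuid.uuid4().hex,               # partition key
    'RecId': 12345,                         # internal record identifier
    'MessageType': 'UPDATE',                # one of the five message types
    'MessageTimestampUTC': '1700000000000', # SQS SentTimestamp, as a string
    'Message': {'status': 'ACTIVE'},        # audit payload from the API
}
print(sorted(item.keys()))
```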

I had to seed this table with an initial load of the current state of our data (around 400k records), and the seeding process used this same Lambda function. My first indication that something was wrong was that only around 100k of the 400k records actually made it into the table, even though all 400k writes returned a 200 response code. After noticing this, I tried triggering the processes that push individual records into the table: I can see that my function fires correctly, my API calls return correctly, and saveStatus appears to be successful, yet my records are not showing up in the DynamoDB table. I guess I have a couple of questions here:

  1. Is there some write limit for DynamoDB tables? Did I possibly push too much data into it initially, so that I can't push anything new until my limit resets? (I can’t find a straight answer to this anywhere online.)
  2. Am I doing something completely wrong? My process was copied almost exactly from the AWS documentation, but AWS’s documentation is notoriously bad.
  3. Is there some sort of error/write logging that can be enabled for DynamoDB through CloudWatch? (I can’t find any information on this anywhere either.)
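For context on what I mean by "a 200 status code" above, this is roughly the shape of what put_item hands back (as I understand boto3's response format; trimmed to the fields I actually look at, with the real metadata redacted):

```python
# Trimmed, hypothetical copy of a put_item return value; the real response
# also carries RequestId and HTTP headers, omitted here.
saveStatus = {
    'ResponseMetadata': {
        'HTTPStatusCode': 200,
        'RetryAttempts': 0,
    }
}

status = saveStatus['ResponseMetadata']['HTTPStatusCode']
print(status)  # this is the value I see printed in CloudWatch
```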

A couple other things I’ve checked:

  • The UUIDs aren’t colliding (unlikely I know but I’ve become desperate)
  • The individual records are WELL under the 400 KB DynamoDB item size limit (the largest record I’ve seen is 5 KB)
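For what it’s worth, this is roughly the kind of sanity check I ran for the collision bullet (a sketch; the count here is a stand-in for the ~400k rows in the real table):

```python
import uuid

# Generate a large batch of hex UUIDs and confirm they are all distinct;
# any collision would make the set smaller than n.
n = 100_000
ids = {uuid.uuid4().hex for _ in range(n)}
print(len(ids) == n)
```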

I’m honestly at a loss; I don’t understand how or why the process worked for 100k records and then suddenly stopped working.