Testing edge cases in TDD?

When doing TDD “by the book”, as far as I understand, we should only write failing tests, i.e. tests for functionality that has not been implemented yet.

I often find myself wanting to add tests for edge cases that I expect to already pass with the current implementation. Such a test does not drive the implementation; it is merely there to confirm what I suspect and to document the intent.
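For concreteness, here is a minimal sketch of the kind of test I mean (the slugify function and its behaviour are entirely made up for illustration): the first test drove the implementation, while the second covers an edge case I expect to pass as-is.

```python
# Made-up example: a tiny slugify function that was test-driven for the
# ordinary case, plus an edge-case test I expect to pass already.

def slugify(title: str) -> str:
    """Lower-case the title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify_basic():
    # This test drove the implementation above (it failed first).
    assert slugify("Hello World") == "hello-world"

def test_slugify_empty_string():
    # Edge case: I expect this to pass with the current implementation;
    # it confirms and documents the intent rather than driving new code.
    assert slugify("") == ""
```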

In “TDD proper”, is writing these kinds of tests part of the “refactor” step, or are they not part of test-driving at all and meant to be added in a separate pass altogether?

Thank you for your feedback on the question. I agree it was unclear. To clarify:

Note that I’m explicitly looking for references to the originators/popularisers of TDD. I can invent sensible ways to deal with this situation myself, so it’s not common sense I’m asking for help with. I want to try to follow the original idea dogmatically for a little bit at first.

Also note that this is not because I’ve written too much code in response to a previously failing test. Kent Beck explicitly recognises in TDD By Example that if you can write the obvious, correct implementation right away, by all means do it.

It’s at that point that I want to verify that it really is the correct implementation, by adding the other tests I would have written to drive the implementation had it not been obvious to me right away.

These are not strictly redundant test cases either: different implementations might handle the edge cases differently, and the tests pin down the choices I believe are correct, verifying and documenting them.
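To illustrate that last point with the same made-up example: two implementations can satisfy the driving test yet diverge on an edge case, and the extra test records which behaviour I consider correct.

```python
# Both of these pass the driving test for "Hello World", but they disagree
# on a title containing repeated spaces.

def slugify_split(title: str) -> str:
    # str.split() collapses runs of whitespace, so no doubled hyphens.
    return "-".join(title.lower().split())

def slugify_replace(title: str) -> str:
    # str.replace() keeps every space, producing "hello--world" here.
    return title.lower().replace(" ", "-")

def test_repeated_spaces_are_collapsed():
    # This edge-case test documents the behaviour I believe is correct;
    # only slugify_split satisfies it.
    assert slugify_split("Hello  World") == "hello-world"
```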