What Matt said, but I want to add that JPEG actually has two compression schemes built in. The first is based on the discrete cosine transform (DCT), which allows certain frequency components of the image to be thrown out. This is the lossy compression controlled by the “quality” parameter, which trades off file size against fidelity. At maximum quality, this compression scheme is mostly eliminated.
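To make the frequency-discarding idea concrete, here is a minimal pure-Python sketch of the 1-D version of what JPEG does to each block: transform, then round each coefficient to a multiple of a quantization step, with coarser steps for higher frequencies. The sample values and step sizes are made up for illustration; real JPEG uses 8×8 2-D blocks and standardized quantization tables.

```python
import math

N = 8  # JPEG works on 8-sample blocks (8x8 in 2-D; 1-D here to keep it short)

def dct(x):
    """Orthonormal DCT-II of an 8-sample block."""
    return [
        (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
        * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
        for k in range(N)
    ]

def idct(X):
    """Inverse transform (DCT-III); undoes dct() exactly when nothing is quantized."""
    return [
        sum(
            (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
            for k in range(N)
        )
        for n in range(N)
    ]

def quantize(X, base_step):
    """Round each coefficient to a multiple of its step. Higher-frequency
    coefficients get coarser steps, so fine detail is discarded first."""
    return [round(X[k] / (base_step * (k + 1))) * (base_step * (k + 1)) for k in range(N)]

samples = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values (illustrative)
coarse = idct(quantize(dct(samples), 10))    # low "quality": noticeable error
fine = idct(quantize(dct(samples), 1))       # high "quality": near-exact
```

Run the rounded-off coefficients back through the inverse transform and you get the lossy reconstruction: the coarse setting visibly perturbs the samples, while the fine setting barely touches them, which is why a maximum-quality JPEG is so close to lossless.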
JPEG also uses Huffman coding for additional compression. That is a lossless scheme, so it is always there without any need to control it.
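A quick way to convince yourself that entropy coding of this kind is truly lossless is to round-trip some bytes through zlib, whose DEFLATE format combines LZ77 with Huffman coding (not JPEG's exact Huffman stage, but the same lossless principle):

```python
import zlib

# Stand-in for image data: a repetitive byte pattern, like smooth image regions.
data = bytes(range(256)) * 64

packed = zlib.compress(data, level=9)   # DEFLATE = LZ77 + Huffman coding
unpacked = zlib.decompress(packed)

# The round trip is byte-exact, yet the packed form is much smaller.
```

No quality knob exists here because none is needed: decompression always reproduces the input exactly.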
So even at maximum “quality”, JPEG will have some useful compression. I just looked at the sizes of one example image of an ordinary scene for comparison. The Nikon NEF raw file is 26 MB, which contains 14 bits/pixel and is uncompressed. My post-processed JPEG version saved at maximum quality is 9.1 MB. This contains 24 bits/pixel, although of course some information is lost and other information interpolated from values in the original raw image. This same post-processed image converted to a TIFF file with LZW and forward differencing compression (both lossless) came out to 20.3 MB.
As a final experiment, I converted both the 9.1 MB post-processed JPEG and the TIFF derived from it to new JPEG files at the maximum quality setting. Both resulting JPEG files are exactly the same size, byte for byte, at about 8.5 MB. This shows that even at maximum quality, a little lossy compression is going on, but not much. It also proves the point that no information at all was lost going to the TIFF file.
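If you want to go one step beyond comparing sizes, hashing the two files proves they are byte-for-byte identical, not just coincidentally the same length. A small stdlib-only sketch (the filenames are placeholders; point it at your own two exported JPEGs):

```python
import hashlib
import os
import tempfile

def digest(path):
    """SHA-256 of a file's contents, for byte-exact comparison."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with two throwaway files standing in for the two exported JPEGs.
tmp = tempfile.mkdtemp()
a = os.path.join(tmp, "from_jpeg.jpg")
b = os.path.join(tmp, "from_tiff.jpg")
payload = os.urandom(4096)
for p in (a, b):
    with open(p, "wb") as f:
        f.write(payload)

same = digest(a) == digest(b)  # identical content -> identical hashes
```

Equal hashes mean the two conversion paths produced the very same bitstream, which is a stronger statement than matching file sizes.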
As Matt does, I archive the original RAW files from the camera. I also archive my general-purpose post-processed version as JPEG at maximum quality. Even pixel peeping at high contrast and sharp edges doesn’t reveal compression artifacts to the human eyeball. I like having the post-processed picture in JPEG form because it’s probably the most immediately usable format. If there is an issue and I want something different, I’ve always got the raw file to re-derive another post-processed version from, with different tradeoffs.
I used to use 80 as the default quality level of my JPEG images (my software has 0-100 for its quality range), but lately I’ve been using 100 as the default unless there is a specific need for a smaller file size. There usually isn’t. I have gone so far as to change the default for the JPG image driver in the source code so that I don’t have to keep specifying the quality level most of the time. It’s not like the old days where a GB was a lot of memory. (Actually, I’m old enough to remember when 1 MB was a decent amount of disk space, but back then we weren’t doing digital photography either.)
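My image driver isn’t public, but the change amounts to nothing more than moving the quality default into one place. A hypothetical, library-agnostic sketch of the idea (all names here are made up; `save_fn` stands in for whatever encoder call your image library actually provides):

```python
DEFAULT_JPEG_QUALITY = 100  # was 80; one change here covers every call site

def save_jpeg(image, path, save_fn, quality=None):
    """Save `image` as a JPEG via the injected `save_fn` encoder,
    falling back to the module-wide default quality when none is given."""
    q = DEFAULT_JPEG_QUALITY if quality is None else quality
    save_fn(image, path, quality=q)
    return q
```

Callers that care about file size can still pass an explicit `quality=80`; everyone else silently gets maximum quality.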