Edge techniques in non-photorealistic (NPR) rendering serve two purposes: defining boundaries between two objects, or two remotely connected parts of the same object, that have a similar appearance and overlap in the 2D scene; and transitioning smoothly from internal shading techniques (which often behave poorly near the edges of a mesh) to the boundaries of an object.
Following the theme of this series of articles, which is to provide a conceptual framework for programming away the gap between what we can draw with our hands and what a computer shows our eyes, I'll start by pointing out some general tendencies in artistic pieces. The relationship varies from technique to technique, but by and large, in artistic styles that include anisotropy in the shading, there is a shift in the pattern of anisotropy between edge strokes and fill strokes. With edges, you're drawing to define how the surface is shaped at its boundary in 2D from the given perspective. With fills, you're drawing to define the 3D surface topology with 2D information.
If you want all edges to look the same, then once you have an edge definition technique, such as edge envelopes (a hardware-based workaround), you can use a simple algorithm for drawing the edges that doesn't distinguish one type of edge from another. This makes sense: you are either on an edge or not on an edge, so the description amounts to a true/false test performed along a one-dimensional data set.
However, most of the time you want your edges to help give more definition to your topology. Since your edges will usually be tangential to the surface of the object you're drawing them for, direction can't play much of a role. To add an extra dimension to the description, one that allows for more than one kind of edge, it's easiest to follow a straightforward connection: to draw a little bit of edge, add a little bit of tangential stroke; to draw a lot of edge, add a lot of tangential stroke, wider and darker. A sharp edge should get a thick or dark stroke, while a glancing edge on a rounded surface should get a thin, light stroke that shows there is less abrupt change in the topology at that edge.
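As a rough sketch of that connection (the function name and the width range are my own assumptions, not values from any particular renderer), the dihedral angle across an edge could drive the stroke weight directly:

```python
def stroke_weight(dihedral_deg, min_width=0.5, max_width=4.0):
    """Map the dihedral angle across an edge (in degrees) to a stroke width.

    0 degrees   -> flat surface, barely any stroke
    180 degrees -> knife-sharp crease, full-weight stroke
    """
    t = min(max(dihedral_deg / 180.0, 0.0), 1.0)
    return min_width + (max_width - min_width) * t

# A sharp 90-degree crease gets a heavier stroke than a glancing 10-degree edge.
print(stroke_weight(90.0))   # → 2.25
print(stroke_weight(10.0))   # thin, close to min_width
```

The same scalar could just as easily drive darkness instead of width, or both; the point is only that a single "edge strength" value is what separates this from the true/false scheme above.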
An edge technique with this kind of behavior has the essential quality of describing important 3D relationships with 2D output. All 3D rendering does this, but describing those relationships with such simple output is what gives an NPR image that immediate, hand-drawn impact as soon as we look at it.
Now that you have some idea of what data you want in your output, you need to know where to create that output. There are plenty of articles on the subject, but the basic recipe for almost any technique is to walk the indices/vertices and determine where the normals across edges flip from camera-facing to non-camera-facing. Some attention has to be paid to whether the edge is visible or not. In fact, the culling operation itself provides a very useful description of edges: if the facing sign of the normal flips across an edge in the un-culled mesh, then after culling, that edge will have only one adjacent face. The culling process thus has an implicit connection to where the edges exist on the mesh. This is a very solid method that covers a variety of edge conditions on a mesh with few or no holes (or with extra bits of geometry tacked on at holes to provide the data the edge detection needs). If you're familiar with the back-face culling technique implemented in hardware, you can do this pretty quickly. I would search for a paper on it before attempting to write one from scratch.
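A minimal sketch of that facing-sign test, assuming an indexed triangle mesh viewed orthographically down -Z so that screen-space winding gives the facing sign (all names here are hypothetical):

```python
def face_facing(verts, tri):
    """True if the triangle is front-facing, judged by screen-space winding."""
    ax, ay, _ = verts[tri[0]]
    bx, by, _ = verts[tri[1]]
    cx, cy, _ = verts[tri[2]]
    # z component of the cross product of two edge vectors
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) > 0

def silhouette_edges(verts, tris):
    # Map each undirected edge to the facing flags of its adjacent faces.
    edge_faces = {}
    for tri in tris:
        facing = face_facing(verts, tri)
        for i in range(3):
            edge = tuple(sorted((tri[i], tri[(i + 1) % 3])))
            edge_faces.setdefault(edge, []).append(facing)
    # A silhouette edge is shared by one front-facing and one back-facing face.
    # A border edge (one adjacent face) is also drawn, which matches the
    # culling observation: after culling, a silhouette edge has one face left.
    return [edge for edge, facings in edge_faces.items()
            if len(facings) == 1 or facings[0] != facings[1]]

# Two triangles sharing edge (0, 1), one wound front-facing, one back-facing:
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, -1, 0)]
tris = [(0, 1, 2), (0, 1, 3)]
print(silhouette_edges(verts, tris))  # includes the shared edge (0, 1)
```

A real implementation would do the facing test with actual face normals and the view vector per face, but the edge/adjacency bookkeeping is the same.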
This process takes care of simple edge detection, but often you will want a smooth transition toward the edge. You can implement parts of your edge drawing technique inside your fill drawing technique, so that the edge treatment starts to be applied before the actual edge of the mesh. This also gives some definition to near edges, but it has issues with large areas of nearly edge-on surface.
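One way to sketch that blend (this particular falloff is my own assumption, not a prescribed formula) is to fade the edge treatment in as the surface normal turns away from the view direction:

```python
def edge_falloff(normal, view, exponent=2.0):
    """Blend factor for applying edge treatment inside the fill shading.

    normal, view: unit 3-vectors. Returns 0.0 for a surface facing the
    camera head-on, approaching 1.0 as the surface turns edge-on, so the
    edge stroke fades in before the actual silhouette is reached.
    """
    ndotv = abs(sum(n * v for n, v in zip(normal, view)))
    return (1.0 - ndotv) ** exponent

print(edge_falloff((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # → 0.0 (facing camera)
print(edge_falloff((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # → 1.0 (edge-on)
```

This also makes the failure mode concrete: a large, gently curved region seen nearly edge-on gets a high falloff value everywhere, so the "edge" treatment smears across the whole area instead of hugging the silhouette.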
I'll wrap up the article for now and add more ideas later. The biggest challenge I've faced in edge drawing so far is that direct techniques can find the edges quickly, but don't always have a good surface to apply a shader to. The problem is very simple: you're trying to apply a material at an edge, where by definition there is no geometry past a certain point on which to draw; you can't draw outside the 2D silhouette. Post-process techniques suffer from the difficulty of detecting edges without some form of input: without extra data describing the edges, there is no way to distinguish two adjacent 2D surfaces with similar appearance. Doing edge detection purely in 2D is not recommended; doing edge detection in 3D and piping the output to a 2D process is.