
I'm following this tutorial: http://blog.bignerdranch.com/754-scenekit-in-mountain-lion/

I'm interested in using Scene Kit, but my scenes might potentially have thousands of spheres. To stress-test Scene Kit I tried this:

SCNSphere *sphere = [SCNSphere sphereWithRadius:0.5];
for (int i=0; i<10; i++) {
    for(int j=0; j<10; j++){
        for(int k=0; k<10; k++){
            SCNNode *myNode = [SCNNode nodeWithGeometry:sphere];
            myNode.position = SCNVector3Make(i,j,k);
            [root addChildNode:myNode];
        }
    }
}

This works fine for, say, 1000 spheres (10^3) but fails (perhaps unsurprisingly) for 1,000,000 spheres (100^3). I don't mind not being able to use a million spheres, but I'd like to work out what the sensible upper bound is (5,000? 15,000?) and how to increase it.

What can I do to mitigate this? e.g. I've tried sphere.segmentCount = 3 and while that speeds up rendering, it doesn't have much effect on memory usage, which I suspect is the limiting factor.

Also, there doesn't seem to be an SCNPoint class. I was thinking of switching to displaying a plain point when the number of spheres gets too high, but I can't see from the SceneKit documentation how to display a simple point -- the simplest primitive I can find is a triangle.

Any help is much appreciated.

Edit: @toyos suggested merging the SCNSphere objects into a single SCNGeometry object (provided they don't need to be independently animated, which they don't), but I can't find an easy way to do this.

An SCNGeometry is created with [SCNGeometry geometryWithSources:(NSArray *)sources elements:(NSArray *)elements] as documented here, but I'm not clear on how to build such an SCNGeometry object from my spheres.

e.g. for a single sphere, I could use sphere.geometryElementCount to get the number of elements and then populate an array with [sphere geometryElementAtIndex:(NSInteger)elementIndex], but I'm not sure how to get the "sources" (or what they even are). The method to get the geometry sources is [sphere geometrySourcesForSemantic:(NSString *)semantic], but what is this semantic string? Is it meant to be "normals" or "vertices", or something else? The documentation rather unhelpfully says the semantic is "The semantic value of the geometry source." without listing the possible values.

That's just for a single sphere, which would be fairly pointless (since SCNSphere is already a subclass of SCNGeometry), so next I'd have to add multiple spheres. Would I have to manually translate each sphere's vertices when adding them to my SCNGeometry object?

I'm just trying to figure out the most sensible way to do this.


2 Answers


The semantic strings are SCNGeometrySourceSemanticVertex, SCNGeometrySourceSemanticNormal, SCNGeometrySourceSemanticTexcoord, and so on.

For multiple spheres the answer is yes: you have to transform the vertices/normals by each node's transform before flattening.

Below is a simplified example (i.e. it only supports merging the children of "input" if they all share the same geometry):

- (SCNNode *) flattenNodeHierarchy:(SCNNode *) input
{
    SCNNode *result = [SCNNode node];

    NSUInteger nodeCount = [[input childNodes] count];
    if(nodeCount > 0){
        SCNNode *node = [[input childNodes] objectAtIndex:0];

        NSArray *vertexArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex];
        SCNGeometrySource *vertex = [vertexArray objectAtIndex:0];

        SCNGeometryElement *element = [node.geometry geometryElementAtIndex:0]; //todo: support multiple elements
        NSUInteger primitiveCount = element.primitiveCount;
        NSUInteger newPrimitiveCount = primitiveCount * nodeCount;
        size_t elementBufferLength = newPrimitiveCount * 3 * sizeof(int); //nTriangle x 3 vertex * size of int
        int* elementBuffer = (int*)malloc(elementBufferLength);

        /* Simple case: here we assume that all the objects to flatten share the same geometry.
         In the general case we would iterate over every geometry and accumulate the vertex/triangle counts, etc. */

        NSUInteger vertexCount = [vertex vectorCount];
        NSUInteger newVertexCount = vertexCount * nodeCount;

        SCNVector3 *newVertex = malloc(sizeof(SCNVector3) * newVertexCount);        
        SCNVector3 *newNormal = malloc(sizeof(SCNVector3) * newVertexCount); //assume same number of normal/vertex

        //fill
        NSUInteger vertexFillIndex = 0;
        NSUInteger primitiveFillIndex = 0;
        for(NSUInteger index=0; index< nodeCount; index++){
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            node = [[input childNodes] objectAtIndex:index];

            NSArray *vertexArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex];
            NSArray *normalArray = [node.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticNormal];
            SCNGeometrySource *vertex = [vertexArray objectAtIndex:0];
            SCNGeometrySource *normals = [normalArray objectAtIndex:0];

            if([vertex bytesPerComponent] != sizeof(float)){
                NSLog(@"todo: support other byte per component");
                continue;
            }

            // assumes the source data is tightly packed floats (default dataOffset/dataStride)
            const float *vertexBuffer = (const float *)[[vertex data] bytes];
            const float *normalBuffer = (const float *)[[normals data] bytes];

            CATransform3D t = [node transform];
            GLKMatrix4 matrix = MyGLKMatrix4FromCATransform3D(t);

            //append source
            for(NSUInteger vIndex = 0; vIndex < vertexCount; vIndex++, vertexFillIndex++){
                GLKVector3 v = GLKVector3Make(vertexBuffer[vIndex * 3], vertexBuffer[vIndex * 3+1], vertexBuffer[vIndex * 3 + 2]);
                GLKVector3 n = GLKVector3Make(normalBuffer[vIndex * 3], normalBuffer[vIndex * 3+1], normalBuffer[vIndex * 3 + 2]);

                //transform
                v = GLKMatrix4MultiplyVector3WithTranslation(matrix, v);
                n = GLKMatrix4MultiplyVector3(matrix, n);

                newVertex[vertexFillIndex] = SCNVector3Make(v.x, v.y, v.z);
                newNormal[vertexFillIndex] = SCNVector3Make(n.x, n.y, n.z);
            }

            //append elements
            //here we assume that all elements are SCNGeometryPrimitiveTypeTriangles
            SCNGeometryElement *element = [node.geometry geometryElementAtIndex:0];
            const void *inputPrimitive = [element.data bytes];
            size_t bpi = element.bytesPerIndex;

            NSUInteger offset = index * vertexCount;

            for(NSUInteger pIndex = 0; pIndex < primitiveCount; pIndex++, primitiveFillIndex+=3){                
                elementBuffer[primitiveFillIndex] = offset + _getIndex(inputPrimitive, bpi, pIndex*3);
                elementBuffer[primitiveFillIndex+1] = offset + _getIndex(inputPrimitive, bpi, pIndex*3+1);
                elementBuffer[primitiveFillIndex+2] = offset + _getIndex(inputPrimitive, bpi, pIndex*3+2);
            }

            [pool drain];
        }

        NSArray *sources = @[[SCNGeometrySource geometrySourceWithVertices:newVertex count:newVertexCount],
                             [SCNGeometrySource geometrySourceWithNormals:newNormal count:newVertexCount]];

        NSData *newElementData = [NSMutableData dataWithBytesNoCopy:elementBuffer length:elementBufferLength freeWhenDone:YES];
        NSArray *elements = @[[SCNGeometryElement geometryElementWithData:newElementData
                                                            primitiveType:SCNGeometryPrimitiveTypeTriangles
                                                           primitiveCount:newPrimitiveCount bytesPerIndex:sizeof(int)]];

        result.geometry = [SCNGeometry geometryWithSources:sources elements:elements];

        //cleanup
        free(newVertex);
        free(newNormal);
    }

    return result;
}

//helpers:
GLKMatrix4 MyGLKMatrix4FromCATransform3D(CATransform3D transform) {
    GLKMatrix4 m = {{transform.m11, transform.m12, transform.m13, transform.m14,
        transform.m21, transform.m22, transform.m23, transform.m24,
        transform.m31, transform.m32, transform.m33, transform.m34,
        transform.m41, transform.m42, transform.m43, transform.m44}};
    return m;
}



GLKVector3 MySCNVector3ToGLKVector3(SCNVector3 vector) {
    GLKVector3 v = {{vector.x, vector.y, vector.z}};
    return v;
}
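
The _getIndex helper used above isn't defined in the answer; here is a minimal sketch of what it presumably does (read one index of the element's bytesPerIndex size), assuming the indices are 16- or 32-bit integers. It is a hypothetical reconstruction, not part of the original code:

// Hypothetical helper (not in the original answer): reads index number `position`
// from an element buffer whose indices are bytesPerIndex bytes wide.
static int _getIndex(const void *buffer, size_t bytesPerIndex, NSUInteger position)
{
    if (bytesPerIndex == sizeof(unsigned short)) {
        return ((const unsigned short *)buffer)[position];
    }
    return ((const int *)buffer)[position];
}

Declare or define it above flattenNodeHierarchy: so the calls compile.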
Answered 2013-02-14

How best to do this depends on exactly what you're looking to accomplish.

Are these thousands of points (a star field backdrop for an outer space scene, perhaps) static, or do they need to move with respect to each other? Do they actually need to be spheres? How much detail do they need?

If they don't need to move independently, merging them into a single geometry is a good idea. On Mavericks (OS X 10.9) you don't need to mess with geometry data yourself to do that: create a node for each one, parent them all to a single node (not your scene's root node), and call flattenedClone to get a copy of that node whose geometries are combined.
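
A rough sketch of that approach, reusing the loop from the question (sphere and scene are assumed to already exist):

// OS X 10.9+: build the nodes under a throwaway container, then flatten once.
SCNNode *container = [SCNNode node];
for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++) {
        for (int k = 0; k < 10; k++) {
            SCNNode *node = [SCNNode nodeWithGeometry:sphere];
            node.position = SCNVector3Make(i, j, k);
            [container addChildNode:node];
        }
    }
}

// One node whose geometry combines all the children.
SCNNode *flattened = [container flattenedClone];
[scene.rootNode addChildNode:flattened];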

If they don't need to have much detail, there are a few options for improving performance.

One is to reduce the segmentCount of the sphere geometry — you don't need 5000 triangles to draw a sphere that'll only be a couple of pixels wide when rendered, which is about what you get with the default segment count of 48. (If you're going to mess with the geometry data or flatten nodes immediately after reducing the segment count, be sure to call [SCNTransaction flush] to make sure it gets updated.)
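
For example (a minimal sketch; pick whatever segment count still looks acceptable at your viewing distance):

sphere.segmentCount = 6;   // default is 48; far fewer triangles per sphere
[SCNTransaction flush];    // force the geometry to rebuild before reading or flattening it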

Another is to reduce the triangle count further. Are the stars (or whatever) small enough that they even need to be spheres? If your scene can be set up so they're always oriented toward the camera, SCNPlane might be better — with its minimum segment count it's just two triangles.
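
A sketch of that alternative (keeping the plane facing the camera, e.g. with a constraint, is not shown here):

// Two triangles per object instead of a tessellated sphere.
SCNPlane *plane = [SCNPlane planeWithWidth:1.0 height:1.0];
plane.widthSegmentCount = 1;   // minimum tessellation: a single quad
plane.heightSegmentCount = 1;
SCNNode *planeNode = [SCNNode nodeWithGeometry:plane];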

Do they even need to be triangles? Scene Kit can render points; there's no SCNGeometry subclass for them because it's generally not useful to position and transform a single point independently. But you can create a custom geometry using an array of vertex positions and a geometry element with the SCNGeometryPrimitiveTypePoint primitive type. And if you want to customize how the points are rendered, you can attach shaders (or shader modifiers) to the geometry.
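
A minimal sketch of such a point geometry (the positions here are made up for illustration, and scene is assumed to be your SCNScene):

// Build a point-cloud geometry from an array of SCNVector3 positions.
NSUInteger pointCount = 1000;
SCNVector3 *positions = malloc(sizeof(SCNVector3) * pointCount);
for (NSUInteger i = 0; i < pointCount; i++) {
    positions[i] = SCNVector3Make(arc4random_uniform(100), arc4random_uniform(100), arc4random_uniform(100));
}
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:pointCount];

// One point primitive per vertex; the indices are simply 0..pointCount-1.
int *indices = malloc(sizeof(int) * pointCount);
for (NSUInteger i = 0; i < pointCount; i++) {
    indices[i] = (int)i;
}
NSData *indexData = [NSData dataWithBytesNoCopy:indices length:sizeof(int) * pointCount freeWhenDone:YES];
SCNGeometryElement *pointElement = [SCNGeometryElement geometryElementWithData:indexData
                                                                  primitiveType:SCNGeometryPrimitiveTypePoint
                                                                 primitiveCount:pointCount
                                                                  bytesPerIndex:sizeof(int)];

SCNGeometry *pointCloud = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[pointElement]];
[scene.rootNode addChildNode:[SCNNode nodeWithGeometry:pointCloud]];
free(positions); // geometrySourceWithVertices:count: copies the data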

Answered 2014-03-05