Level 2 (lev2) graphics


Lev2Gfx Architecture

Summary:

  1. Low-level graphics device API abstraction
  • State Management.
  • RenderTarget Management (FBOs, etc.)
  • Buffer Management (Textures, Geometry, UBO, SSBO)
  • Image/Texture Loaders (leveraging OpenImageIO).
  • Pipeline-based primitive drawing.
    • Each combination of [material pass, geometry/primitive type, and framebuffer topology] maps to a pipeline object
    • Instancing support
  • Custom Shader Language (GLFX) - portable across all device APIs.
  • Compute Shader support.
  • NVidia Mesh Shader support.
  • NVidia Single Pass Stereo support.
  • OpenGL 4.1-core (MacOs) <- Replaced By MoltenVK
  • OpenGL 4.6-core (Linux) <- Replaced By Vulkan
  • Pipeline object support pending.
  • Vulkan and MoltenVK
  2. Mid-level renderer building blocks
  • Natively supports update and render on separate threads via async drawbuffers.
  • DrawBuffer - A frame's worth of rendering data generated by the update thread.
  • Materials - Materials wrap shaders with state management and variation support.
  • RenderQueues - State-sorted drawbuffers.
  • Compositor - Node-based frame composition, AKA a frame graph.
    • Allows for abstract variation of the pipeline based on criteria such as mono/stereo, etc.
  3. Higher-level renderers
  • Forward & Deferred PBR renderer.
    • Implemented as Node Compositor nodes.
    • Metallic-Roughness workflow.
    • Supports monoscopic and stereo-VR.
    • Point lights.
    • Radiance Probes
    • Spot lights (textured and untextured).
    • Directional lights.
    • Shadowing support
    • Lightmapping support (including lightmap blending)
    • Hybrid permutations pending (e.g. Forward+).
    • Deferred
      • Tiled Deferred Shading (Deferred only).
      • Simple light processor (CPU submits light batches without light-tile-culling).
      • CPU based light processor (CPU light-tile-culls and submits light batches).
      • NV Mesh Shader based light processor (GPU light-tile-culls and submits light batches).
      • Signed Distance Field / Deferred shading hybrid rendering support.
      • Single layer transparency / blending support
  • Picking renderer.
    • Implemented as a Node Compositor technique. Pixel perfect picking support.
  4. Many Drawable types

    • Heightfields - Large heightfield support with sliding window vertex texturing technique.
    • Rigid 3d model assets (PBR-MR-GLTF/GLB and other Assimp formats)
      • Diced clustering for culling
      • Instanced rendering
    • Skinned/Animated 3d model assets (PBR-MR-GLTF/GLB)
      • 4-Bone weighted based clustering
      • Animation blending support
      • Instanced rendering
    • Dynamic 3d meshes (supply mesh data from c++/python)
      • Built-in mesh processing utilities (partitioning, dicing, skinned clusterization, etc.)
      • IGL also integrated for mesh processing (Heavy duty geometry algorithms)
      • Instanced rendering
    • Node based particles systems
      • Instanced rendering (render multiple instances of a given system)
      • Mesh rendering (render each particle as a mesh)
      • Sprite rendering (render each particle as a billboarded sprite)
      • Streak rendering (render each particle as a billboarded streak)
    • Node based procedural textures
    • Billboard Drawables
      • Text
      • Images
      • Custom textured vector art
      • Instanced rendering support
    • Callback Drawable
      • User-specified rendering type utilizing lambdas
      • Standardized method for passing data from update thread to render thread in drawbuffers
      • Instanced rendering supported for callback drawables as well.
      • Used for custom renderables not covered internally by engine
  5. SceneGraph (highest level)

    • Wraps all of the above functionality into a scene-node-based paradigm
    • Integrates with ECS via SceneGraphComponent/SceneGraphSystem
    • Also works standalone with C++ or python
  6. Integrations

    • PyTorch/CUDA integration (can source shader storage buffers from torch::Tensor without PCIe bus traffic)
    • IGL (Geometry Processing Library)
    • OpenVDB (Volumetric Data Processing Library)

Example (low-level) code:

using namespace std::string_literals;
using namespace ork;
using namespace ork::lev2;
using namespace ork::lev2::deferrednode;
typedef SVtxV12C4T16 vtx_t; // position, vertex color, 2 UV sets

struct Instance {
  fvec3 _curpos;
  fvec3 _curaxis;
  float _curangle = 0.0f;
  fvec3 _target;
  fvec3 _targetaxis;
  float _targetangle = 0.0f;
  float _timeout     = 0.0f;
};

using instances_t       = std::vector<Instance>;

///////////////////////////////////////////////////////////////////

struct GpuResources {

  GpuResources(Context* ctx){
    _renderer       = std::make_shared<DefaultRenderer>();
    _lightmgr       = std::make_shared<LightManager>(_lmd);

    _camlut = std::make_shared<CameraDataLut>();
    _camdata = std::make_shared<CameraData>();
    _camlut->AddSorted("spawncam", _camdata.get());

    _instanced_drawable = std::make_shared<InstancedModelDrawable>();

  //////////////////////////////////////////////////////////
  // initialize compositor (necessary for PBR models)
  //  use a deferredPBR compositing node
  //  which does all the gbuffer and lighting passes
  //////////////////////////////////////////////////////////

    _compositordata = std::make_shared<CompositingData>();
    _compositordata->presetPBR();
    _compositordata->mbEnable = true;
    auto nodetek             = _compositordata->tryNodeTechnique<NodeCompositingTechnique>("scene1"_pool, "item1"_pool);
    auto outpnode            = nodetek->tryOutputNodeAs<ScreenOutputCompositingNode>();
    // outpnode->setSuperSample(4);
    _compositorimpl = _compositordata->createImpl();
    _compositorimpl->bindLighting(_lightmgr);

    _TOPCPD = std::make_shared<lev2::CompositingPassData>();
    _TOPCPD->addStandardLayers();
    _instances          = std::make_shared<instances_t>();

    ctx->debugPushGroup("main.onGpuInit");
    _modelasset = asset::AssetManager<XgmModelAsset>::load("data://tests/pbr1/pbr1");
    _renderer->setContext(ctx);

    _instanced_drawable->bindModel(_modelasset->getSharedModel());

    constexpr size_t KNUMINSTANCES = 30;

    _instanced_drawable->resize(KNUMINSTANCES);
    _instanced_drawable->gpuInit(ctx);

    for (size_t i = 0; i < KNUMINSTANCES; i++) {
      Instance inst;
      _instances->push_back(inst);
    }
    ctx->debugPopGroup();

  }
  instanced_modeldrawable_ptr_t _instanced_drawable;
  renderer_ptr_t _renderer;
  LightManagerData _lmd;
  lightmanager_ptr_t _lightmgr;
  compositingpassdata_ptr_t _TOPCPD;
  compositorimpl_ptr_t _compositorimpl;
  compositordata_ptr_t _compositordata;
  std::shared_ptr<instances_t> _instances;
  lev2::xgmmodelassetptr_t _modelasset; // retain model
  cameradata_ptr_t _camdata;
  cameradatalut_ptr_t _camlut;
};

///////////////////////////////////////////////////////////////////

int main(int argc, char** argv,char** envp) {
  auto init_data = std::make_shared<ork::AppInitData>(argc,argv,envp);
  auto ezapp  = OrkEzApp::create(init_data);
  auto ezwin              = ezapp->_mainWindow;
  auto gfxwin             = ezwin->_gfxwin;
  std::shared_ptr<GpuResources> gpurec;
  //////////////////////////////////////////////////////////
  ezapp->onGpuInit([&](Context* ctx) {
    gpurec = std::make_shared<GpuResources>(ctx);
  });
  //////////////////////////////////////////////////////////
  ork::Timer timer;
  timer.Start();
  auto dbufcontext = std::make_shared<DrawBufContext>();
  ezapp->onUpdate([&](ui::updatedata_ptr_t updata) {
    double dt      = updata->_dt;
    double abstime = updata->_abstime;
    ///////////////////////////////////////
    // compute camera data
    ///////////////////////////////////////
    float phase    = abstime * PI2 * 0.1f;
    float distance = 10.0f;
    auto eye       = fvec3(sinf(phase), 1.0f, -cosf(phase)) * distance;
    fvec3 tgt(0, 0, 0);
    fvec3 up(0, 1, 0);
    gpurec->_camdata->Lookat(eye, tgt, up);
    gpurec->_camdata->Persp(1, 20.0, 45.0);
    ///////////////////////////////////////
    auto DB = dbufcontext->acquireForWriteLocked();
    DB->Reset();
    DB->copyCameras(*gpurec->_camlut);
    auto layer = DB->MergeLayer("Default");
    ////////////////////////////////////////
    // animate and enqueue all instances
    ////////////////////////////////////////

    auto drawable = gpurec->_instanced_drawable;
    auto instdata = drawable->_instancedata;

    int index = 0;
    for (auto& inst : *gpurec->_instances) {
      fvec3 delta   = inst._target - inst._curpos;
      inst._curpos += delta.normalized() * dt * 1.0;

      delta         = inst._targetaxis - inst._curaxis;
      inst._curaxis = (inst._curaxis + delta.normalized() * dt * 0.1).normalized();
      inst._curangle += (inst._targetangle - inst._curangle) * dt * 0.1;

      if (inst._timeout < abstime) {
        inst._timeout  = abstime + float(rand() % 255) / 64.0;
        inst._target.x = (float(rand() % 255) / 2.55) - 50;
        inst._target.y = (float(rand() % 255) / 2.55) - 50;
        inst._target.z = (float(rand() % 255) / 2.55) - 50;
        inst._target *= (4.5f/50.0f);

        fvec3 axis;
        axis.x            = (float(rand() % 255) / 255.0f) - 0.5f;
        axis.y            = (float(rand() % 255) / 255.0f) - 0.5f;
        axis.z            = (float(rand() % 255) / 255.0f) - 0.5f;
        inst._targetaxis  = axis.normalized();
        inst._targetangle = PI2 * (float(rand() % 255) / 255.0f) - 0.5f;
      }

      fquat q;
      q.fromAxisAngle(fvec4(inst._curaxis, inst._curangle));
      instdata->_worldmatrices[index++].compose(inst._curpos, q, 0.3f);
    }
    DrawQueueXfData ident;
    drawable->enqueueOnLayer(ident, *layer);
    ////////////////////////////////////////
    dbufcontext->releaseFromWriteLocked(DB);
  });
  //////////////////////////////////////////////////////////
  // draw handler (called on main(rendering) thread)
  //////////////////////////////////////////////////////////
  ezapp->onDraw([&](ui::drawevent_constptr_t drwev) {
    ///////////////////////////////////////
    // acquire readonly drawbuffer
    ///////////////////////////////////////
    auto DB = dbufcontext->acquireForReadLocked();
    if (nullptr == DB)
      return; // none available so no point rendering anything...
    ///////////////////////////////////////
    // fetch interfaces
    ///////////////////////////////////////
    auto context = drwev->GetTarget();
    auto fbi  = context->FBI();  // FrameBufferInterface
    auto fxi  = context->FXI();  // FX Interface
    auto mtxi = context->MTXI(); // matrix Interface
    auto gbi  = context->GBI();  // GeometryBuffer Interface
    ///////////////////////////////////////
    float time = timer.SecsSinceStart();
    RenderContextFrameData RCFD(context); // renderer per-frame data
    RCFD._cimpl = gpurec->_compositorimpl;
    RCFD.setUserProperty("DB"_crc, lev2::rendervar_t(DB));
    RCFD.setUserProperty("time"_crc, time);
    RCFD.setUserProperty("pbr_model"_crc, 1);
    context->pushRenderContextFrameData(&RCFD);
    ///////////////////////////////////////
    // compositor and frame setup
    ///////////////////////////////////////
    lev2::UiViewportRenderTarget rt(nullptr);
    auto tgtrect           = context->mainSurfaceRectAtOrigin();
    gpurec->_TOPCPD->_time = time;
    gpurec->_TOPCPD->_irendertarget = &rt;
    gpurec->_TOPCPD->SetDstRect(tgtrect);
    gpurec->_compositorimpl->pushCPD(*gpurec->_TOPCPD);
    ///////////////////////////////////////
    FrameRenderer framerenderer(RCFD, [&]() {});
    CompositorDrawData drawdata(framerenderer);
    drawdata._properties["primarycamindex"_crcu].set<int>(0);
    drawdata._properties["cullcamindex"_crcu].set<int>(0);
    drawdata._properties["irenderer"_crcu].set<lev2::IRenderer*>(gpurec->_renderer.get());
    drawdata._properties["simrunning"_crcu].set<bool>(true);
    drawdata._properties["DB"_crcu].set<const DrawableBuffer*>(DB);
    drawdata._cimpl = gpurec->_compositorimpl;
    ///////////////////////////////////////
    // Draw!
    ///////////////////////////////////////
    fbi->SetClearColor(fvec4(0, 0, 0, 1));
    fbi->setViewport(tgtrect);
    fbi->setScissor(tgtrect);
    context->beginFrame();
    gpurec->_compositorimpl->assemble(drawdata);
    gpurec->_compositorimpl->composite(drawdata);
    gpurec->_compositorimpl->popCPD();
    context->popRenderContextFrameData();
    context->endFrame();
    dbufcontext->releaseFromReadLocked(DB);
  });
  //////////////////////////////////////////////////////////
  ezapp->onResize([&](int w, int h) {
    gpurec->_compositorimpl->compositingContext().Resize(w, h);
  });
  ezapp->onGpuExit([&](Context* ctx) {
    gpurec = nullptr;
  });
  //////////////////////////////////////////////////////////
  ezapp->setRefreshPolicy({EREFRESH_FASTEST, -1});
  return ezapp->mainThreadLoop();
}