
BigGeo
Why Join Us?
Employers often ask why you’d be a good fit to work for them. We prefer to start by showing why we’d be a great fit for you.
Reasons Why You Would Want to Work at BigGeo
- Be Part of an Industry-Shaping Team – Join a pioneering group driving the future of geospatial intelligence and real-time data solutions.
- Work on Cutting-Edge Technology – Contribute to advanced geospatial analytics, real-time data processing, and 3D visualization, shaping industries worldwide.
- See Your Impact Firsthand – Play a direct role in solving critical global challenges, from urban planning and logistics to environmental conservation and emergency response.
- Work-Life Balance & Flexibility – Operate within a self-care work culture that values autonomy, innovation, and personal well-being.
- Collaborate & Innovate – Work in an environment that fosters rapid problem-solving, experimentation, and creative thinking, backed by Vivid Theory, a venture studio focused on building transformative technologies.
Company Description
BigGeo is at the forefront of geospatial data intelligence, developing powerful solutions that transform raw location-based data into actionable insights. Our platform empowers industries by providing cutting-edge tools for real-time geospatial analysis, predictive modeling, and interactive 3D visualization, unlocking new ways to interpret and utilize massive datasets.
As a Vivid Theory company, BigGeo operates at the intersection of commercial and technical innovation, bringing together experts in data science, GIS, and AI-driven analytics to redefine how businesses interact with geospatial data. Our mission is to bridge the gap between complex geospatial information and real-world decision-making, helping industries optimize operations, improve efficiency, and drive meaningful impact.
At BigGeo, we don’t just build software; we revolutionize how the world understands and interacts with data. If you’re ready to be part of a team pushing the boundaries of geospatial intelligence, this is the opportunity for you.
Primary Responsibilities
- Design and implement efficient, reliable, secure, and observable backend systems
- Optimize code for performance and resource utilization
- Contribute to architectural decisions for distributed systems and big-data processing
- Write and maintain observable, instrumented code that enables effective system monitoring
- Lead the development of complex platform features
- Design and implement scalable data architectures
- Conduct thorough performance testing and optimization
- Mentor junior developers; promote and enforce best practices
- Lead initiatives to align platform development with business objectives, ensuring platform features contribute to key outcomes and KPIs
- Facilitate smooth handoff of platform features to product teams, supporting seamless integration into product pipelines
- Continuously evaluate and optimize the platform to improve user experience and deliver measurable business value in support of company growth objectives
- Drive DevOps practices and automation initiatives
- Monitor and analyze technical performance of internal systems
- Implement and maintain CI/CD pipelines
- Support deployment and operational excellence
- Contribute to infrastructure-as-code initiatives
Requirements
- Bachelor’s degree in Computer Science, Software Engineering, Data Science, or a related field (or equivalent practical experience)
- Proven track record in high-performance backend development
- Proficiency in modern, statically typed compiled languages
- Strong understanding of immutability principles and their application
- Expertise in writing efficient, reliable, and secure code
- Proficient with both manual memory management and automatic lifetime management techniques
- Strong understanding of computer architecture and efficient utilization of available resources
- Strong knowledge of fundamental data structures
- Understanding of performance trade-offs between algorithmic efficiency, distributed systems coordination, and I/O minimization in big data contexts
- Experience with modern observability patterns and practices
Nice to Haves
- A Master’s degree or relevant certifications in Distributed Systems, Big Data Processing, or Cloud Computing are a plus
- Experience with Rust (with tokio.rs) or Scala (with cats-effect) will be given top priority
- Experience with any modern statically typed language is a bonus
- Background in big-data processing architectures
- Experience with distributed systems
- Experience with high-performance data structures
- Knowledge of geospatial data structures and algorithms
- Expertise in optimizing I/O operations
- Familiarity with binary protocols
- Experience with distributed eventing systems (e.g., NATS.io)
- Experience with gRPC and other high-performance RPC frameworks
- Proficiency in using version control systems (e.g., Git)
- Experience with container orchestration and cloud platforms
- Familiarity with infrastructure-as-code practices
- Passionate about code efficiency, reliability, and security
- Proactive in finding ways to improve existing systems
- Eager to learn, mentor and teach
- Strong problem-solving skills and critical thinking
- Excellent communication and teamwork abilities
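For candidates curious about the flavor of the requirements above, here is a minimal, hypothetical sketch in standard-library Rust (one of the languages this posting prioritizes) of the immutability and lifetime-management ideas we care about. It is illustrative only, not BigGeo code.

```rust
// Illustrative sketch: immutability by default and explicit ownership in Rust.
// Standard library only; names and data are hypothetical.

fn main() {
    // Bindings are immutable by default; mutation must be opted into with `mut`.
    let readings = vec![3, 1, 4, 1, 5];

    // Borrowing (`iter()`) lets us compute over the data without copying
    // or transferring ownership.
    let total: i32 = readings.iter().sum();
    assert_eq!(total, 14);

    // An ownership move makes the value's lifetime explicit: after this move,
    // `readings` can no longer be used, and the compiler enforces it.
    let owned = readings;
    assert_eq!(owned.len(), 5);

    println!("sum = {total}");
}
```

The same discipline, applied at system scale, is what the "efficient, reliable, and secure code" requirement is about: the compiler, not convention, guarantees who owns what and for how long.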
To apply, please visit the following URL: https://en-ca.whatjobs.com/pub_api__cpl__87789542__4809?utm_campaign=publisher&utm_medium=api&utm_source=4809&geoID=847