Solr metrics added, along with tests. (#6)

Signed-off-by: Houston Putman <houstonputman@gmail.com>
diff --git a/Gopkg.lock b/Gopkg.lock
index d909363..a70c326 100644
--- a/Gopkg.lock
+++ b/Gopkg.lock
@@ -358,6 +358,14 @@
   version = "v0.8.1"
 
 [[projects]]
+  digest = "1:22aa691fe0213cb5c07d103f9effebcb7ad04bee45a0ce5fe5369d0ca2ec3a1f"
+  name = "github.com/pmezard/go-difflib"
+  packages = ["difflib"]
+  pruneopts = "T"
+  revision = "792786c7400a136282c1664665ae0a8db921c6c2"
+  version = "v1.0.0"
+
+[[projects]]
   digest = "1:559af96935e6b54a946ae2512369312e708d2e002d25abc869d021fff3b69913"
   name = "github.com/pravega/zookeeper-operator"
   packages = [
@@ -453,6 +461,14 @@
   version = "v1.0.3"
 
 [[projects]]
+  digest = "1:1d6f160431ac08b6e39b374f2ae9e00c9780cd604d65a5364d7304cd939efd0c"
+  name = "github.com/stretchr/testify"
+  packages = ["assert"]
+  pruneopts = "T"
+  revision = "221dbe5ed46703ee255b1da0dec05086f5035f62"
+  version = "v1.4.0"
+
+[[projects]]
   digest = "1:365b8ecb35a5faf5aa0ee8d798548fc9cd4200cb95d77a5b0b285ac881bae499"
   name = "go.uber.org/atomic"
   packages = ["."]
@@ -1048,6 +1064,7 @@
     "github.com/onsi/gomega",
     "github.com/pravega/zookeeper-operator/pkg/apis",
     "github.com/pravega/zookeeper-operator/pkg/apis/zookeeper/v1beta1",
+    "github.com/stretchr/testify/assert",
     "golang.org/x/net/context",
     "k8s.io/api/apps/v1",
     "k8s.io/api/batch/v1",
diff --git a/README.md b/README.md
index a8f872b..237ade7 100644
--- a/README.md
+++ b/README.md
@@ -8,31 +8,35 @@
 ## Menu
 
 - [Getting Started](#getting-started)
+    - [Solr Cloud](#running-a-solr-cloud)
+    - [Solr Backups](#solr-backups)
+    - [Solr Metrics](#solr-prometheus-exporter)
 - [Contributions](#contributions)
 - [License](#license)
 - [Code of Conduct](#code-of-conduct)
 - [Security Vulnerability Reporting](#security-vulnerability-reporting)
 
-## Getting Started
-
-### Running the Solr Operator
+# Getting Started
 
 Install the Zookeeper & Etcd Operators, which this operator depends on by default.
+Each is optional, as described in the [Zookeeper](#zookeeper-reference) section.
 
 ```bash
 $ kubectl apply -f example/ext_ops.yaml
 ```
 
-Install the SolrCloud CRD & Operator
+Install the Solr CRDs & Operator
 
 ```bash
 $ kubectl apply -f config/crds/solr_v1beta1_solrcloud.yaml
+$ kubectl apply -f config/crds/solr_v1beta1_solrbackup.yaml
+$ kubectl apply -f config/crds/solr_v1beta1_solrprometheusexporter.yaml
 $ kubectl apply -f config/operators/solr_operator.yaml
 ```
                         
-### Lifecyle of a Solr Cloud
+## Running a Solr Cloud
 
-#### Creating
+### Creating
 
 Make sure that the solr-operator and a zookeeper-operator are running.
 
@@ -66,7 +70,7 @@
 example   8.1.1     4              4       4            8m
 ```
 
-#### Scaling
+### Scaling
 
 Increase the number of Solr nodes in your cluster.
 
@@ -74,7 +78,7 @@
 $ kubectl scale --replicas=5 solrcloud/example
 ```
 
-#### Deleting
+### Deleting
 
 Decrease the number of Solr nodes in your cluster.
 
@@ -82,7 +86,7 @@
 $ kubectl delete solrcloud example
 ```
 
-#### Dependent Kubernetes Resources
+### Dependent Kubernetes Resources
 
 What actually gets created when the Solr Cloud is spun up?
 
@@ -131,7 +135,37 @@
 solrcloud.solr.bloomberg.com/example       8.1.1     4              4       4            47h
 ```
 
-### Solr Backups
+### Zookeeper Reference
+
+Solr Clouds require an Apache Zookeeper to connect to.
+
+The Solr operator gives a few options.
+
+#### ZK Connection Info
+
+This is an external/internal connection string, as well as an optional chroot, to an already running Zookeeper ensemble.
+If you provide an external connection string, you do not _have_ to provide an internal one as well.
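+
+As a sketch, a SolrCloud using an existing ensemble might reference it like this (the `zookeeperRef`/`connectionInfo` nesting is an assumption for illustration, and the connection string and chroot values are placeholders — check the SolrCloud CRD for the authoritative field names):
+
+```yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrCloud
+metadata:
+  name: example
+spec:
+  replicas: 4
+  # Assumed field layout for pointing at an existing Zookeeper ensemble
+  zookeeperRef:
+    connectionInfo:
+      internalConnectionString: "zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181"
+      chroot: "/solr"
+```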
+
+#### Provided Instance
+
+If you do not require the Solr cloud to run across Kubernetes clusters, and do not want to manage your own Zookeeper ensemble,
+the solr-operator can manage Zookeeper ensemble(s) for you.
+
+##### Zookeeper
+
+Using the [zookeeper-operator](https://github.com/pravega/zookeeper-operator), a new Zookeeper ensemble can be spun up for 
+each solrCloud that has this option specified.
+
+The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this parameter to be available.
+
+##### Zetcd
+
+Using [etcd-operator](https://github.com/coreos/etcd-operator), a new Etcd ensemble can be spun up for each solrCloud that has this option specified.
+A [Zetcd](https://github.com/etcd-io/zetcd) deployment is also created so that Solr can interact with Etcd as if it were a Zookeeper ensemble.
+
+The startup parameter `etcd-operator` must be provided on startup of the solr-operator for this parameter to be available.
+
+## Solr Backups
 
 Solr backups require 3 things:
 - A solr cloud running in kubernetes to backup
@@ -146,41 +180,27 @@
 Backups will be tarred before they are persisted.
 
 There is no current way to restore these backups, but that is in the roadmap to implement.
+
+
+## Solr Prometheus Exporter
+
+Solr metrics can be collected from Solr Clouds or standalone Solr instances running either inside or outside of the Kubernetes cluster.
+To use the Prometheus exporter, the easiest option is to provide a reference to a Solr instance. That can be any of the following:
+- The name and namespace of the Solr Cloud CRD
+- The Zookeeper connection information of the Solr Cloud
+- The address of the standalone Solr instance
+
+You can also provide a custom Prometheus Exporter config, Solr version, and exporter options as described in the
+[Solr ref-guide](https://lucene.apache.org/solr/guide/monitoring-solr-with-prometheus-and-grafana.html#command-line-parameters).
+
+Note that some of the official Solr docker images do not include the Prometheus Exporter.
+Versions `6.6` - `7.x` and `8.2` - `master` should have the exporter available.
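+
+For example, a minimal exporter pointing at a standalone Solr instance might look like the following (the name and address are illustrative; the field names follow the SolrPrometheusExporter CRD added in this change):
+
+```yaml
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrPrometheusExporter
+metadata:
+  name: standalone-metrics
+spec:
+  solrReference:
+    standalone:
+      address: "http://solr.example.com:8983/solr"
+  # Scrape Solr every 60 seconds (the default)
+  scrapeInterval: 60
+```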
   
   
 ## Solr Images
 
 The solr-operator will work with any of the [official Solr images](https://hub.docker.com/_/solr) currently available.
 
-## Zookeeper
-
-Solr Clouds require an Apache Zookeeper to connect to.
-
-The Solr operator gives a few options.
-
-### ZK Connection Info
-
-This is an external/internal connection string as well as an optional chRoot to an already running Zookeeeper ensemble.
-If you provide an external connection string, you do not _have_ to provide an internal one as well.
-
-### Provided Instance
-
-If you do not require the Solr cloud to run cross-kube cluster, and do not want to manage your own Zookeeper ensemble,
-the solr-operator can manage Zookeeper ensemble(s) for you.
-
-#### Zookeeper
-
-Using the [zookeeper-operator](https://github.com/pravega/zookeeper-operator), a new Zookeeper ensemble can be spun up for 
-each solrCloud that has this option specified.
-
-The startup parameter `zookeeper-operator` must be provided on startup of the solr-operator for this parameter to be available.
-
-#### Zetcd
-
-Using [etcd-operator](https://github.com/coreos/etcd-operator), a new Etcd ensemble can be spun up for each solrCloud that has this option specified.
-A [Zetcd](https://github.com/etcd-io/zetcd) deployment is also created so that Solr can interact with Etcd as if it were a Zookeeper ensemble.
-
-The startup parameter `etcd-operator` must be provided on startup of the solr-operator for this parameter to be available.
 
 ## Solr Operator
 
@@ -225,6 +245,21 @@
 $ NAMESPACE=your-namespace make docker-base-build docker-build docker-push
 ```
 
+## Version Compatibility
+
+### Backwards Incompatible CRD Changes
+
+#### v0.1.1
+- `SolrCloud.Spec.persistentVolumeClaim` was renamed to `SolrCloud.Spec.dataPvcSpec`
+
+### Compatibility with Kubernetes Versions
+
+#### Fully Compatible - v1.12+
+
+#### Feature Gates required for older versions
+
+- *v1.10* - CustomResourceSubresources
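+
+For example, on a local cluster the gate can be enabled at startup (command shown for minikube as an illustration; the exact mechanism depends on how your cluster is provisioned):
+
+```bash
+$ minikube start --kubernetes-version=v1.10.13 --feature-gates=CustomResourceSubresources=true
+```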
+
 ## Contributions
 
 We :heart: contributions.
diff --git a/config/crds/solr_v1beta1_solrcloud.yaml b/config/crds/solr_v1beta1_solrcloud.yaml
index fccd922..28c49fe 100644
--- a/config/crds/solr_v1beta1_solrcloud.yaml
+++ b/config/crds/solr_v1beta1_solrcloud.yaml
@@ -122,9 +122,6 @@
                       description: The connection string to connect to the ensemble
                         from within the Kubernetes cluster
                       type: string
-                  required:
-                  - internalConnectionString
-                  - chroot
                   type: object
                 provided:
                   description: 'A zookeeper that is created by the solr operator Note:
@@ -239,9 +236,6 @@
                   description: The connection string to connect to the ensemble from
                     within the Kubernetes cluster
                   type: string
-              required:
-              - internalConnectionString
-              - chroot
               type: object
           required:
           - solrNodes
diff --git a/config/crds/solr_v1beta1_solrprometheusexporter.yaml b/config/crds/solr_v1beta1_solrprometheusexporter.yaml
new file mode 100644
index 0000000..5cf9f21
--- /dev/null
+++ b/config/crds/solr_v1beta1_solrprometheusexporter.yaml
@@ -0,0 +1,134 @@
+apiVersion: apiextensions.k8s.io/v1beta1
+kind: CustomResourceDefinition
+metadata:
+  creationTimestamp: null
+  labels:
+    controller-tools.k8s.io: "1.0"
+  name: solrprometheusexporters.solr.bloomberg.com
+spec:
+  additionalPrinterColumns:
+  - JSONPath: .status.ready
+    description: Whether the prometheus exporter is ready
+    name: Ready
+    type: boolean
+  - JSONPath: .spec.scrapeInterval
+    description: Scrape interval for metrics (in seconds)
+    name: Scrape Interval
+    type: integer
+  - JSONPath: .metadata.creationTimestamp
+    name: Age
+    type: date
+  group: solr.bloomberg.com
+  names:
+    kind: SolrPrometheusExporter
+    plural: solrprometheusexporters
+    shortNames:
+    - solrmetrics
+  scope: Namespaced
+  subresources:
+    status: {}
+  validation:
+    openAPIV3Schema:
+      properties:
+        apiVersion:
+          description: 'APIVersion defines the versioned schema of this representation
+            of an object. Servers should convert recognized schemas to the latest
+            internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources'
+          type: string
+        kind:
+          description: 'Kind is a string value representing the REST resource this
+            object represents. Servers may infer this from the endpoint the client
+            submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds'
+          type: string
+        metadata:
+          type: object
+        spec:
+          properties:
+            exporterEntrypoint:
+              description: The entrypoint into the exporter. Defaults to the official
+                docker-solr location.
+              type: string
+            image:
+              description: Image of Solr Prometheus Exporter to run.
+              properties:
+                pullPolicy:
+                  type: string
+                repository:
+                  type: string
+                tag:
+                  type: string
+              type: object
+            metricsConfig:
+              description: The xml config for the metrics
+              type: string
+            numThreads:
+              description: Number of threads to use for the prometheus exporter Defaults
+                to 1
+              format: int32
+              type: integer
+            scrapeInterval:
+              description: The interval to scrape Solr at (in seconds) Defaults to
+                60 seconds
+              format: int32
+              type: integer
+            solrReference:
+              description: Reference of the Solr instance to collect metrics for
+              properties:
+                cloud:
+                  description: Reference of a solrCloud instance
+                  properties:
+                    name:
+                      description: The name of a solr cloud running within the kubernetes
+                        cluster
+                      type: string
+                    namespace:
+                      description: The namespace of a solr cloud running within the
+                        kubernetes cluster
+                      type: string
+                    zkConnectionInfo:
+                      description: The ZK Connection information for a cloud, which
+                        can be used for Solr clouds outside of the kube cluster
+                      properties:
+                        chroot:
+                          description: The ChRoot to connect solr at
+                          type: string
+                        externalConnectionString:
+                          description: The connection string to connect to the ensemble
+                            from outside of the Kubernetes cluster If external and
+                            no internal connection string is provided, the external
+                            cnx string will be used as the internal cnx string
+                          type: string
+                        internalConnectionString:
+                          description: The connection string to connect to the ensemble
+                            from within the Kubernetes cluster
+                          type: string
+                      type: object
+                  type: object
+                standalone:
+                  description: Reference of a standalone solr instance
+                  properties:
+                    address:
+                      description: The address of the standalone solr
+                      type: string
+                  required:
+                  - address
+                  type: object
+              type: object
+          required:
+          - solrReference
+          type: object
+        status:
+          properties:
+            ready:
+              description: Is the prometheus exporter up and running
+              type: boolean
+          required:
+          - ready
+          type: object
+  version: v1beta1
+status:
+  acceptedNames:
+    kind: ""
+    plural: ""
+  conditions: []
+  storedVersions: []
diff --git a/config/default/manager/manager.yaml b/config/default/manager/manager.yaml
index b4c4aaa..d2d3fcb 100644
--- a/config/default/manager/manager.yaml
+++ b/config/default/manager/manager.yaml
@@ -19,9 +19,9 @@
     spec:
       containers:
       - args:
-        - -zookeeper-operator
+        - -zk-operator=true
         - -etcd-operator=false
-        - -ingress-base-url=ing.local.domain
+        - -ingress-base-domain=ing.local.domain
         image: bloomberg/solr-operator:latest
         imagePullPolicy: Always
         name: solr-operator
diff --git a/config/default/rbac/rbac_role.yaml b/config/default/rbac/rbac_role.yaml
index 4a70f2c..92ac695 100644
--- a/config/default/rbac/rbac_role.yaml
+++ b/config/default/rbac/rbac_role.yaml
@@ -7,70 +7,10 @@
 - apiGroups:
   - ""
   resources:
-  - persistentvolumeclaims
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - ""
-  resources:
-  - persistentvolumeclaims/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
-  - ""
-  resources:
-  - pods
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - ""
-  resources:
-  - pods/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
-  - ""
-  resources:
   - pods/exec
   verbs:
   - create
 - apiGroups:
-  - apps
-  resources:
-  - deployments
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - apps
-  resources:
-  - deployments/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
   - batch
   resources:
   - jobs
@@ -93,6 +33,20 @@
 - apiGroups:
   - solr.bloomberg.com
   resources:
+  - solrclouds
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
   - solrbackups
   verbs:
   - get
@@ -118,18 +72,12 @@
   - get
   - list
   - watch
-  - create
-  - update
-  - patch
-  - delete
 - apiGroups:
   - ""
   resources:
   - pods/status
   verbs:
   - get
-  - update
-  - patch
 - apiGroups:
   - ""
   resources:
@@ -291,6 +239,94 @@
   - update
   - patch
 - apiGroups:
+  - ""
+  resources:
+  - configmaps
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - ""
+  resources:
+  - configmaps/status
+  verbs:
+  - get
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - ""
+  resources:
+  - services/status
+  verbs:
+  - get
+- apiGroups:
+  - apps
+  resources:
+  - deployments
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - apps
+  resources:
+  - deployments/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrprometheusexporters
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrprometheusexporters/status
+  verbs:
+  - get
+  - update
+  - patch
+- apiGroups:
   - admissionregistration.k8s.io
   resources:
   - mutatingwebhookconfigurations
diff --git a/config/operators/solr_operator.yaml b/config/operators/solr_operator.yaml
index 849b305..bc30138 100644
--- a/config/operators/solr_operator.yaml
+++ b/config/operators/solr_operator.yaml
@@ -7,70 +7,10 @@
 - apiGroups:
   - ""
   resources:
-  - persistentvolumeclaims
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - ""
-  resources:
-  - persistentvolumeclaims/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
-  - ""
-  resources:
-  - pods
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - ""
-  resources:
-  - pods/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
-  - ""
-  resources:
   - pods/exec
   verbs:
   - create
 - apiGroups:
-  - apps
-  resources:
-  - deployments
-  verbs:
-  - get
-  - list
-  - watch
-  - create
-  - update
-  - patch
-  - delete
-- apiGroups:
-  - apps
-  resources:
-  - deployments/status
-  verbs:
-  - get
-  - update
-  - patch
-- apiGroups:
   - batch
   resources:
   - jobs
@@ -93,6 +33,20 @@
 - apiGroups:
   - solr.bloomberg.com
   resources:
+  - solrclouds
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
   - solrbackups
   verbs:
   - get
@@ -118,18 +72,12 @@
   - get
   - list
   - watch
-  - create
-  - update
-  - patch
-  - delete
 - apiGroups:
   - ""
   resources:
   - pods/status
   verbs:
   - get
-  - update
-  - patch
 - apiGroups:
   - ""
   resources:
@@ -291,6 +239,94 @@
   - update
   - patch
 - apiGroups:
+  - ""
+  resources:
+  - configmaps
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - ""
+  resources:
+  - configmaps/status
+  verbs:
+  - get
+- apiGroups:
+  - ""
+  resources:
+  - services
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - ""
+  resources:
+  - services/status
+  verbs:
+  - get
+- apiGroups:
+  - apps
+  resources:
+  - deployments
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - apps
+  resources:
+  - deployments/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds
+  verbs:
+  - get
+  - list
+  - watch
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrclouds/status
+  verbs:
+  - get
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrprometheusexporters
+  verbs:
+  - get
+  - list
+  - watch
+  - create
+  - update
+  - patch
+  - delete
+- apiGroups:
+  - solr.bloomberg.com
+  resources:
+  - solrprometheusexporters/status
+  verbs:
+  - get
+  - update
+  - patch
+- apiGroups:
   - admissionregistration.k8s.io
   resources:
   - mutatingwebhookconfigurations
@@ -371,9 +407,9 @@
     spec:
       containers:
       - args:
-        - -zookeeper-operator
+        - -zk-operator=true
         - -etcd-operator=false
-        - -ingress-base-url=ing.local.domain
+        - -ingress-base-domain=ing.local.domain
         env:
         - name: POD_NAMESPACE
           valueFrom:
diff --git a/example/test_solrcloud.yaml b/example/test_solrcloud.yaml
index 7d615d9..bb95f14 100644
--- a/example/test_solrcloud.yaml
+++ b/example/test_solrcloud.yaml
@@ -5,4 +5,4 @@
 spec:
   replicas: 4
   solrImage:
-    tag: 8.1.1
\ No newline at end of file
+    tag: 8.2.0
\ No newline at end of file
diff --git a/example/test_solrprometheusexporter.yaml b/example/test_solrprometheusexporter.yaml
new file mode 100644
index 0000000..177a2d6
--- /dev/null
+++ b/example/test_solrprometheusexporter.yaml
@@ -0,0 +1,13 @@
+apiVersion: solr.bloomberg.com/v1beta1
+kind: SolrPrometheusExporter
+metadata:
+  labels:
+    controller-tools.k8s.io: "1.0"
+  name: solrprometheusexporter-sample
+spec:
+  solrReference:
+    cloud:
+      name: "example"
+  image:
+    tag: 8.2.0
diff --git a/pkg/apis/solr/v1beta1/solrcloud_types.go b/pkg/apis/solr/v1beta1/solrcloud_types.go
index 8ba298f..39992e6 100644
--- a/pkg/apis/solr/v1beta1/solrcloud_types.go
+++ b/pkg/apis/solr/v1beta1/solrcloud_types.go
@@ -423,7 +423,7 @@
 type ZookeeperConnectionInfo struct {
 	// The connection string to connect to the ensemble from within the Kubernetes cluster
 	// +optional
-	InternalConnectionString string `json:"internalConnectionString"`
+	InternalConnectionString string `json:"internalConnectionString,omitempty"`
 
 	// The connection string to connect to the ensemble from outside of the Kubernetes cluster
 	// If external and no internal connection string is provided, the external cnx string will be used as the internal cnx string
@@ -431,7 +431,8 @@
 	ExternalConnectionString *string `json:"externalConnectionString,omitempty"`
 
 	// The ChRoot to connect solr at
-	ChRoot string `json:"chroot"`
+	// +optional
+	ChRoot string `json:"chroot,omitempty"`
 }
 
 // +genclient
@@ -526,7 +527,11 @@
 	return sc.Status.ZkConnectionString()
 }
 func (scs SolrCloudStatus) ZkConnectionString() string {
-	return scs.ZookeeperConnectionInfo.InternalConnectionString + scs.ZookeeperConnectionInfo.ChRoot
+	return scs.ZookeeperConnectionInfo.ZkConnectionString()
+}
+
+func (zkInfo ZookeeperConnectionInfo) ZkConnectionString() string {
+	return zkInfo.InternalConnectionString + zkInfo.ChRoot
 }
 
 func (sc *SolrCloud) CommonIngressPrefix() string {
diff --git a/pkg/apis/solr/v1beta1/solrprometheusexporter_types.go b/pkg/apis/solr/v1beta1/solrprometheusexporter_types.go
new file mode 100644
index 0000000..4b7b724
--- /dev/null
+++ b/pkg/apis/solr/v1beta1/solrprometheusexporter_types.go
@@ -0,0 +1,216 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1beta1
+
+import (
+	"fmt"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+const (
+	SolrPrometheusExporterTechnologyLabel = "solr-prometheus-exporter"
+)
+
+// SolrPrometheusExporterSpec defines the desired state of SolrPrometheusExporter
+type SolrPrometheusExporterSpec struct {
+	// Reference of the Solr instance to collect metrics for
+	SolrReference `json:"solrReference"`
+
+	// Image of Solr Prometheus Exporter to run.
+	// +optional
+	Image *ContainerImage `json:"image,omitempty"`
+
+	// The entrypoint into the exporter. Defaults to the official docker-solr location.
+	// +optional
+	ExporterEntrypoint string `json:"exporterEntrypoint,omitempty"`
+
+	// Number of threads to use for the prometheus exporter
+	// Defaults to 1
+	// +optional
+	NumThreads int32 `json:"numThreads,omitempty"`
+
+	// The interval to scrape Solr at (in seconds)
+	// Defaults to 60 seconds
+	// +optional
+	ScrapeInterval int32 `json:"scrapeInterval,omitempty"`
+
+	// The xml config for the metrics
+	// +optional
+	Config string `json:"metricsConfig,omitempty"`
+}
+
+func (ps *SolrPrometheusExporterSpec) withDefaults(namespace string) (changed bool) {
+	changed = ps.SolrReference.withDefaults(namespace) || changed
+
+	if ps.Image == nil {
+		ps.Image = &ContainerImage{}
+	}
+	changed = ps.Image.withDefaults(DefaultSolrRepo, DefaultSolrVersion, DefaultPullPolicy) || changed
+
+	if ps.NumThreads == 0 {
+		ps.NumThreads = 1
+		changed = true
+	}
+
+	return changed
+}
+
+// SolrReference defines a reference to an internal or external solrCloud or standalone solr
+// One, and only one, of Cloud or Standalone must be provided.
+type SolrReference struct {
+	// Reference of a solrCloud instance
+	// +optional
+	Cloud *SolrCloudReference `json:"cloud,omitempty"`
+
+	// Reference of a standalone solr instance
+	// +optional
+	Standalone *StandaloneSolrReference `json:"standalone,omitempty"`
+}
+
+func (sr *SolrReference) withDefaults(namespace string) (changed bool) {
+	if sr.Cloud != nil {
+		changed = sr.Cloud.withDefaults(namespace) || changed
+	}
+	return changed
+}
+
+// SolrCloudReference defines a reference to an internal or external solrCloud.
+// Internal (to the kube cluster) clouds should be specified via the Name and Namespace options.
+// External clouds should be specified by their Zookeeper connection information.
+type SolrCloudReference struct {
+	// The name of a solr cloud running within the kubernetes cluster
+	// +optional
+	Name string `json:"name,omitempty"`
+
+	// The namespace of a solr cloud running within the kubernetes cluster
+	// +optional
+	Namespace string `json:"namespace,omitempty"`
+
+	// The ZK Connection information for a cloud, which can be used for Solr clouds outside of the kube cluster
+	// +optional
+	ZookeeperConnectionInfo *ZookeeperConnectionInfo `json:"zkConnectionInfo,omitempty"`
+}
+
+func (scr *SolrCloudReference) withDefaults(namespace string) (changed bool) {
+	if scr.Name != "" {
+		if scr.Namespace == "" {
+			scr.Namespace = namespace
+			changed = true
+		}
+	}
+
+	if scr.ZookeeperConnectionInfo != nil {
+		changed = scr.ZookeeperConnectionInfo.withDefaults() || changed
+	}
+	return changed
+}
+
+// StandaloneSolrReference defines a reference to a standalone solr instance
+type StandaloneSolrReference struct {
+	// The address of the standalone solr
+	Address string `json:"address"`
+}
+
+// SolrPrometheusExporterStatus defines the observed state of SolrPrometheusExporter
+type SolrPrometheusExporterStatus struct {
+	// An address the prometheus exporter can be connected to from within the Kube cluster
+	// InternalAddress string `json:"internalAddress"`
+
+	// An address the prometheus exporter can be connected to from outside of the Kube cluster
+	// Will only be provided when an ingressUrl is provided for the cloud
+	// +optional
+	// ExternalAddress string `json:"externalAddress,omitempty"`
+
+	// Is the prometheus exporter up and running
+	Ready bool `json:"ready"`
+}
+
+func (spe *SolrPrometheusExporter) SharedLabels() map[string]string {
+	return spe.SharedLabelsWith(map[string]string{})
+}
+
+func (spe *SolrPrometheusExporter) SharedLabelsWith(labels map[string]string) map[string]string {
+	newLabels := map[string]string{}
+
+	if labels != nil {
+		for k, v := range labels {
+			newLabels[k] = v
+		}
+	}
+
+	newLabels[SolrPrometheusExporterTechnologyLabel] = spe.Name
+	return newLabels
+}
+
+// MetricsDeploymentName returns the name of the metrics deployment for the cloud
+func (sc *SolrPrometheusExporter) MetricsDeploymentName() string {
+	return fmt.Sprintf("%s-solr-metrics", sc.GetName())
+}
+
+// MetricsConfigMapName returns the name of the metrics configMap for the cloud
+func (sc *SolrPrometheusExporter) MetricsConfigMapName() string {
+	return fmt.Sprintf("%s-solr-metrics", sc.GetName())
+}
+
+// MetricsServiceName returns the name of the metrics service for the cloud
+func (sc *SolrPrometheusExporter) MetricsServiceName() string {
+	return fmt.Sprintf("%s-solr-metrics", sc.GetName())
+}
+
+func (sc *SolrPrometheusExporter) MetricsIngressPrefix() string {
+	return fmt.Sprintf("%s-%s-solr-metrics", sc.Namespace, sc.Name)
+}
+
+func (sc *SolrPrometheusExporter) MetricsIngressUrl(ingressBaseUrl string) string {
+	return fmt.Sprintf("%s.%s", sc.MetricsIngressPrefix(), ingressBaseUrl)
+}
+
+// +genclient
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// SolrPrometheusExporter is the Schema for the solrprometheusexporters API
+// +k8s:openapi-gen=true
+// +kubebuilder:resource:shortName=solrmetrics
+// +kubebuilder:subresource:status
+// +kubebuilder:printcolumn:name="Ready",type="boolean",JSONPath=".status.ready",description="Whether the prometheus exporter is ready"
+// +kubebuilder:printcolumn:name="Scrape Interval",type="integer",JSONPath=".spec.scrapeInterval",description="Scrape interval for metrics (in seconds)"
+// +kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTimestamp"
+type SolrPrometheusExporter struct {
+	metav1.TypeMeta   `json:",inline"`
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	Spec   SolrPrometheusExporterSpec   `json:"spec,omitempty"`
+	Status SolrPrometheusExporterStatus `json:"status,omitempty"`
+}
+
+// WithDefaults set default values when not defined in the spec.
+func (spe *SolrPrometheusExporter) WithDefaults() bool {
+	return spe.Spec.withDefaults(spe.Namespace)
+}
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// SolrPrometheusExporterList contains a list of SolrPrometheusExporter
+type SolrPrometheusExporterList struct {
+	metav1.TypeMeta `json:",inline"`
+	metav1.ListMeta `json:"metadata,omitempty"`
+	Items           []SolrPrometheusExporter `json:"items"`
+}
+
+func init() {
+	SchemeBuilder.Register(&SolrPrometheusExporter{}, &SolrPrometheusExporterList{})
+}
diff --git a/pkg/apis/solr/v1beta1/solrprometheusexporter_types_test.go b/pkg/apis/solr/v1beta1/solrprometheusexporter_types_test.go
new file mode 100644
index 0000000..6994155
--- /dev/null
+++ b/pkg/apis/solr/v1beta1/solrprometheusexporter_types_test.go
@@ -0,0 +1,58 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1beta1
+
+import (
+	"testing"
+
+	"github.com/onsi/gomega"
+	"golang.org/x/net/context"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+)
+
+func TestStorageSolrPrometheusExporter(t *testing.T) {
+	key := types.NamespacedName{
+		Name:      "foo",
+		Namespace: "default",
+	}
+	created := &SolrPrometheusExporter{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      "foo",
+			Namespace: "default",
+		}}
+	g := gomega.NewGomegaWithT(t)
+
+	// Test Create
+	fetched := &SolrPrometheusExporter{}
+	g.Expect(c.Create(context.TODO(), created)).NotTo(gomega.HaveOccurred())
+
+	g.Expect(c.Get(context.TODO(), key, fetched)).NotTo(gomega.HaveOccurred())
+	g.Expect(fetched).To(gomega.Equal(created))
+
+	// Test Updating the Labels
+	updated := fetched.DeepCopy()
+	updated.Labels = map[string]string{"hello": "world"}
+	g.Expect(c.Update(context.TODO(), updated)).NotTo(gomega.HaveOccurred())
+
+	g.Expect(c.Get(context.TODO(), key, fetched)).NotTo(gomega.HaveOccurred())
+	g.Expect(fetched).To(gomega.Equal(updated))
+
+	// Test Delete
+	g.Expect(c.Delete(context.TODO(), fetched)).NotTo(gomega.HaveOccurred())
+	g.Expect(c.Get(context.TODO(), key, fetched)).To(gomega.HaveOccurred())
+}
diff --git a/pkg/apis/solr/v1beta1/zz_generated.deepcopy.go b/pkg/apis/solr/v1beta1/zz_generated.deepcopy.go
index bf65ed4..6312d7e 100644
--- a/pkg/apis/solr/v1beta1/zz_generated.deepcopy.go
+++ b/pkg/apis/solr/v1beta1/zz_generated.deepcopy.go
@@ -424,6 +424,27 @@
 }
 
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrCloudReference) DeepCopyInto(out *SolrCloudReference) {
+	*out = *in
+	if in.ZookeeperConnectionInfo != nil {
+		in, out := &in.ZookeeperConnectionInfo, &out.ZookeeperConnectionInfo
+		*out = new(ZookeeperConnectionInfo)
+		(*in).DeepCopyInto(*out)
+	}
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrCloudReference.
+func (in *SolrCloudReference) DeepCopy() *SolrCloudReference {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrCloudReference)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *SolrCloudSpec) DeepCopyInto(out *SolrCloudSpec) {
 	*out = *in
 	if in.Replicas != nil {
@@ -513,6 +534,147 @@
 }
 
 // DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrPrometheusExporter) DeepCopyInto(out *SolrPrometheusExporter) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+	in.Spec.DeepCopyInto(&out.Spec)
+	out.Status = in.Status
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrPrometheusExporter.
+func (in *SolrPrometheusExporter) DeepCopy() *SolrPrometheusExporter {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrPrometheusExporter)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *SolrPrometheusExporter) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrPrometheusExporterList) DeepCopyInto(out *SolrPrometheusExporterList) {
+	*out = *in
+	out.TypeMeta = in.TypeMeta
+	out.ListMeta = in.ListMeta
+	if in.Items != nil {
+		in, out := &in.Items, &out.Items
+		*out = make([]SolrPrometheusExporter, len(*in))
+		for i := range *in {
+			(*in)[i].DeepCopyInto(&(*out)[i])
+		}
+	}
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrPrometheusExporterList.
+func (in *SolrPrometheusExporterList) DeepCopy() *SolrPrometheusExporterList {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrPrometheusExporterList)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *SolrPrometheusExporterList) DeepCopyObject() runtime.Object {
+	if c := in.DeepCopy(); c != nil {
+		return c
+	}
+	return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrPrometheusExporterSpec) DeepCopyInto(out *SolrPrometheusExporterSpec) {
+	*out = *in
+	in.SolrReference.DeepCopyInto(&out.SolrReference)
+	if in.Image != nil {
+		in, out := &in.Image, &out.Image
+		*out = new(ContainerImage)
+		**out = **in
+	}
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrPrometheusExporterSpec.
+func (in *SolrPrometheusExporterSpec) DeepCopy() *SolrPrometheusExporterSpec {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrPrometheusExporterSpec)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrPrometheusExporterStatus) DeepCopyInto(out *SolrPrometheusExporterStatus) {
+	*out = *in
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrPrometheusExporterStatus.
+func (in *SolrPrometheusExporterStatus) DeepCopy() *SolrPrometheusExporterStatus {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrPrometheusExporterStatus)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SolrReference) DeepCopyInto(out *SolrReference) {
+	*out = *in
+	if in.Cloud != nil {
+		in, out := &in.Cloud, &out.Cloud
+		*out = new(SolrCloudReference)
+		(*in).DeepCopyInto(*out)
+	}
+	if in.Standalone != nil {
+		in, out := &in.Standalone, &out.Standalone
+		*out = new(StandaloneSolrReference)
+		**out = **in
+	}
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SolrReference.
+func (in *SolrReference) DeepCopy() *SolrReference {
+	if in == nil {
+		return nil
+	}
+	out := new(SolrReference)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *StandaloneSolrReference) DeepCopyInto(out *StandaloneSolrReference) {
+	*out = *in
+	return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new StandaloneSolrReference.
+func (in *StandaloneSolrReference) DeepCopy() *StandaloneSolrReference {
+	if in == nil {
+		return nil
+	}
+	out := new(StandaloneSolrReference)
+	in.DeepCopyInto(out)
+	return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
 func (in *VolumePersistenceSource) DeepCopyInto(out *VolumePersistenceSource) {
 	*out = *in
 	in.VolumeSource.DeepCopyInto(&out.VolumeSource)
diff --git a/pkg/controller/add_solrprometheusexporter.go b/pkg/controller/add_solrprometheusexporter.go
new file mode 100644
index 0000000..a2dc248
--- /dev/null
+++ b/pkg/controller/add_solrprometheusexporter.go
@@ -0,0 +1,26 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package controller
+
+import (
+	"github.com/bloomberg/solr-operator/pkg/controller/solrprometheusexporter"
+)
+
+func init() {
+	// AddToManagerFuncs is a list of functions to create controllers and add them to a manager.
+	AddToManagerFuncs = append(AddToManagerFuncs, solrprometheusexporter.Add)
+}
diff --git a/pkg/controller/solrbackup/solrbackup_controller.go b/pkg/controller/solrbackup/solrbackup_controller.go
index 41b2e57..fba6c8e 100644
--- a/pkg/controller/solrbackup/solrbackup_controller.go
+++ b/pkg/controller/solrbackup/solrbackup_controller.go
@@ -97,18 +97,12 @@
 
 // Reconcile reads that state of the cluster for a SolrBackup object and makes changes based on the state read
 // and what is in the SolrBackup.Spec
-// TODO(user): Modify this Reconcile function to implement your Controller logic.  The scaffolding writes
-// a Deployment as an example
-// Automatically generate RBAC rules to allow the Controller to read and write Deployments
-// +kubebuilder:rbac:groups=,resources=persistentvolumeclaims,verbs=get;list;watch;create;update;patch;delete
-// +kubebuilder:rbac:groups=,resources=persistentvolumeclaims/status,verbs=get;update;patch
-// +kubebuilder:rbac:groups=,resources=pods,verbs=get;list;watch;create;update;patch;delete
-// +kubebuilder:rbac:groups=,resources=pods/status,verbs=get;update;patch
+// Automatically generate RBAC rules to allow the Controller to read and write Jobs, execute in Pods, and read SolrClouds
 // +kubebuilder:rbac:groups=,resources=pods/exec,verbs=create
-// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
-// +kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get;update;patch
 // +kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;update;patch;delete
 // +kubebuilder:rbac:groups=batch,resources=jobs/status,verbs=get;update;patch
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrclouds,verbs=get;list;watch
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrclouds/status,verbs=get
 // +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrbackups,verbs=get;list;watch;create;update;patch;delete
 // +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrbackups/status,verbs=get;update;patch
 func (r *ReconcileSolrBackup) Reconcile(request reconcile.Request) (reconcile.Result, error) {
diff --git a/pkg/controller/solrcloud/solrcloud_controller.go b/pkg/controller/solrcloud/solrcloud_controller.go
index a20f066..36e2f61 100644
--- a/pkg/controller/solrcloud/solrcloud_controller.go
+++ b/pkg/controller/solrcloud/solrcloud_controller.go
@@ -171,8 +171,8 @@
 
 // Reconcile reads that state of the cluster for a SolrCloud object and makes changes based on the state read
 // and what is in the SolrCloud.Spec
-// +kubebuilder:rbac:groups=,resources=pods,verbs=get;list;watch;create;update;patch;delete
-// +kubebuilder:rbac:groups=,resources=pods/status,verbs=get;update;patch
+// +kubebuilder:rbac:groups=,resources=pods,verbs=get;list;watch
+// +kubebuilder:rbac:groups=,resources=pods/status,verbs=get
 // +kubebuilder:rbac:groups=,resources=services,verbs=get;list;watch;create;update;patch;delete
 // +kubebuilder:rbac:groups=,resources=services/status,verbs=get;update;patch
 // +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
diff --git a/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller.go b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller.go
new file mode 100644
index 0000000..15591fe
--- /dev/null
+++ b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller.go
@@ -0,0 +1,233 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package solrprometheusexporter
+
+import (
+	"context"
+	solrv1beta1 "github.com/bloomberg/solr-operator/pkg/apis/solr/v1beta1"
+	"github.com/bloomberg/solr-operator/pkg/controller/util"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	"k8s.io/apimachinery/pkg/api/errors"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/controller"
+	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
+	"sigs.k8s.io/controller-runtime/pkg/handler"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+	logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
+	"sigs.k8s.io/controller-runtime/pkg/source"
+)
+
+var log = logf.Log.WithName("controller")
+
+// Add creates a new SolrPrometheusExporter Controller and adds it to the Manager with default RBAC. The Manager will set fields on the Controller
+// and Start it when the Manager is Started.
+func Add(mgr manager.Manager) error {
+	return add(mgr, newReconciler(mgr))
+}
+
+// newReconciler returns a new reconcile.Reconciler
+func newReconciler(mgr manager.Manager) reconcile.Reconciler {
+	return &ReconcileSolrPrometheusExporter{Client: mgr.GetClient(), scheme: mgr.GetScheme()}
+}
+
+// add adds a new Controller to mgr with r as the reconcile.Reconciler
+func add(mgr manager.Manager, r reconcile.Reconciler) error {
+	// Create a new controller
+	c, err := controller.New("solrprometheusexporter-controller", mgr, controller.Options{Reconciler: r})
+	if err != nil {
+		return err
+	}
+
+	// Watch for changes to SolrPrometheusExporter
+	err = c.Watch(&source.Kind{Type: &solrv1beta1.SolrPrometheusExporter{}}, &handler.EnqueueRequestForObject{})
+	if err != nil {
+		return err
+	}
+
+	err = c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
+		IsController: true,
+		OwnerType:    &solrv1beta1.SolrPrometheusExporter{},
+	})
+	if err != nil {
+		return err
+	}
+
+	err = c.Watch(&source.Kind{Type: &corev1.Service{}}, &handler.EnqueueRequestForOwner{
+		IsController: true,
+		OwnerType:    &solrv1beta1.SolrPrometheusExporter{},
+	})
+	if err != nil {
+		return err
+	}
+
+	err = c.Watch(&source.Kind{Type: &corev1.ConfigMap{}}, &handler.EnqueueRequestForOwner{
+		IsController: true,
+		OwnerType:    &solrv1beta1.SolrPrometheusExporter{},
+	})
+	if err != nil {
+		return err
+	}
+
+	return nil
+}
+
+var _ reconcile.Reconciler = &ReconcileSolrPrometheusExporter{}
+
+// ReconcileSolrPrometheusExporter reconciles a SolrPrometheusExporter object
+type ReconcileSolrPrometheusExporter struct {
+	client.Client
+	scheme *runtime.Scheme
+}
+
+// Reconcile reads the state of the cluster for a SolrPrometheusExporter object and makes changes based on the state read
+// and what is in the SolrPrometheusExporter.Spec
+// Automatically generate RBAC rules to allow the Controller to read and write Deployments, Services, and ConfigMaps, and to read SolrClouds
+// +kubebuilder:rbac:groups=,resources=configmaps,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=,resources=configmaps/status,verbs=get
+// +kubebuilder:rbac:groups=,resources=services,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=,resources=services/status,verbs=get
+// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrclouds,verbs=get;list;watch
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrclouds/status,verbs=get
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrprometheusexporters,verbs=get;list;watch;create;update;patch;delete
+// +kubebuilder:rbac:groups=solr.bloomberg.com,resources=solrprometheusexporters/status,verbs=get;update;patch
+func (r *ReconcileSolrPrometheusExporter) Reconcile(request reconcile.Request) (reconcile.Result, error) {
+	// Fetch the SolrPrometheusExporter instance
+	prometheusExporter := &solrv1beta1.SolrPrometheusExporter{}
+	err := r.Get(context.TODO(), request.NamespacedName, prometheusExporter)
+	if err != nil {
+		if errors.IsNotFound(err) {
+			// Object not found, return.  Created objects are automatically garbage collected.
+			// For additional cleanup logic use finalizers.
+			return reconcile.Result{}, nil
+		}
+		// Error reading the object - requeue the request.
+		return reconcile.Result{}, err
+	}
+
+	changed := prometheusExporter.WithDefaults()
+	if changed {
+		log.Info("Setting default settings for Solr PrometheusExporter", "namespace", prometheusExporter.Namespace, "name", prometheusExporter.Name)
+		if err := r.Update(context.TODO(), prometheusExporter); err != nil {
+			return reconcile.Result{}, err
+		}
+		return reconcile.Result{Requeue: true}, nil
+	}
+
+	if prometheusExporter.Spec.Config != "" {
+		// Generate ConfigMap
+		configMap := util.GenerateMetricsConfigMap(prometheusExporter)
+		if err := controllerutil.SetControllerReference(prometheusExporter, configMap, r.scheme); err != nil {
+			return reconcile.Result{}, err
+		}
+
+		// Check if the ConfigMap already exists
+		foundConfigMap := &corev1.ConfigMap{}
+		err = r.Get(context.TODO(), types.NamespacedName{Name: configMap.Name, Namespace: configMap.Namespace}, foundConfigMap)
+		if err != nil && errors.IsNotFound(err) {
+			log.Info("Creating PrometheusExporter ConfigMap", "namespace", configMap.Namespace, "name", configMap.Name)
+			err = r.Create(context.TODO(), configMap)
+		} else if err == nil && util.CopyConfigMapFields(configMap, foundConfigMap) {
+			// Update the found ConfigMap and write the result back if there are any changes
+			log.Info("Updating PrometheusExporter ConfigMap", "namespace", configMap.Namespace, "name", configMap.Name)
+			err = r.Update(context.TODO(), foundConfigMap)
+		}
+		if err != nil {
+			return reconcile.Result{}, err
+		}
+	}
+
+	// Generate Metrics Service
+	metricsService := util.GenerateSolrMetricsService(prometheusExporter)
+	if err := controllerutil.SetControllerReference(prometheusExporter, metricsService, r.scheme); err != nil {
+		return reconcile.Result{}, err
+	}
+
+	// Check if the Metrics Service already exists
+	foundMetricsService := &corev1.Service{}
+	err = r.Get(context.TODO(), types.NamespacedName{Name: metricsService.Name, Namespace: metricsService.Namespace}, foundMetricsService)
+	if err != nil && errors.IsNotFound(err) {
+		log.Info("Creating PrometheusExporter Service", "namespace", metricsService.Namespace, "name", metricsService.Name)
+		err = r.Create(context.TODO(), metricsService)
+	} else if err == nil && util.CopyServiceFields(metricsService, foundMetricsService) {
+		// Update the found Metrics Service and write the result back if there are any changes
+		log.Info("Updating PrometheusExporter Service", "namespace", metricsService.Namespace, "name", metricsService.Name)
+		err = r.Update(context.TODO(), foundMetricsService)
+	}
+	if err != nil {
+		return reconcile.Result{}, err
+	}
+
+	// Get the Solr connection info (a standalone address or a ZK connection string) to scrape
+	solrConnectionInfo := util.SolrConnectionInfo{}
+	if solrConnectionInfo, err = getSolrConnectionInfo(r, prometheusExporter); err != nil {
+		return reconcile.Result{}, err
+	}
+
+	deploy := util.GenerateSolrPrometheusExporterDeployment(prometheusExporter, solrConnectionInfo)
+	if err := controllerutil.SetControllerReference(prometheusExporter, deploy, r.scheme); err != nil {
+		return reconcile.Result{}, err
+	}
+
+	foundDeploy := &appsv1.Deployment{}
+	err = r.Get(context.TODO(), types.NamespacedName{Name: deploy.Name, Namespace: deploy.Namespace}, foundDeploy)
+	if err != nil && errors.IsNotFound(err) {
+		log.Info("Creating PrometheusExporter Deployment", "namespace", deploy.Namespace, "name", deploy.Name)
+		err = r.Create(context.TODO(), deploy)
+	} else if err == nil {
+		if util.CopyDeploymentFields(deploy, foundDeploy) {
+			log.Info("Updating PrometheusExporter Deployment", "namespace", deploy.Namespace, "name", deploy.Name)
+			err = r.Update(context.TODO(), foundDeploy)
+			if err != nil {
+				return reconcile.Result{}, err
+			}
+		}
+		ready := foundDeploy.Status.ReadyReplicas > 0
+
+		if ready != prometheusExporter.Status.Ready {
+			prometheusExporter.Status.Ready = ready
+			log.Info("Updating status for solr-prometheus-exporter", "namespace", prometheusExporter.Namespace, "name", prometheusExporter.Name)
+			err = r.Status().Update(context.TODO(), prometheusExporter)
+		}
+	}
+	return reconcile.Result{}, err
+}
+
+func getSolrConnectionInfo(r *ReconcileSolrPrometheusExporter, prometheusExporter *solrv1beta1.SolrPrometheusExporter) (solrConnectionInfo util.SolrConnectionInfo, err error) {
+	solrConnectionInfo = util.SolrConnectionInfo{}
+
+	if prometheusExporter.Spec.SolrReference.Standalone != nil {
+		solrConnectionInfo.StandaloneAddress = prometheusExporter.Spec.SolrReference.Standalone.Address
+	}
+	if prometheusExporter.Spec.SolrReference.Cloud != nil {
+		if prometheusExporter.Spec.SolrReference.Cloud.ZookeeperConnectionInfo != nil {
+			solrConnectionInfo.CloudZkConnnectionString = prometheusExporter.Spec.SolrReference.Cloud.ZookeeperConnectionInfo.ZkConnectionString()
+		} else if prometheusExporter.Spec.SolrReference.Cloud.Name != "" {
+			solrCloud := &solrv1beta1.SolrCloud{}
+			err = r.Get(context.TODO(), types.NamespacedName{Name: prometheusExporter.Spec.SolrReference.Cloud.Name, Namespace: prometheusExporter.Spec.SolrReference.Cloud.Namespace}, solrCloud)
+			if err == nil {
+				solrConnectionInfo.CloudZkConnnectionString = solrCloud.Status.ZookeeperConnectionInfo.ZkConnectionString()
+			}
+		}
+	}
+	return solrConnectionInfo, err
+}
diff --git a/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_suite_test.go b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_suite_test.go
new file mode 100644
index 0000000..c7ae127
--- /dev/null
+++ b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_suite_test.go
@@ -0,0 +1,75 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package solrprometheusexporter
+
+import (
+	stdlog "log"
+	"os"
+	"path/filepath"
+	"sync"
+	"testing"
+
+	"github.com/bloomberg/solr-operator/pkg/apis"
+	"github.com/onsi/gomega"
+	"k8s.io/client-go/kubernetes/scheme"
+	"k8s.io/client-go/rest"
+	"sigs.k8s.io/controller-runtime/pkg/envtest"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+)
+
+var cfg *rest.Config
+
+func TestMain(m *testing.M) {
+	t := &envtest.Environment{
+		CRDDirectoryPaths: []string{filepath.Join("..", "..", "..", "config", "crds")},
+	}
+	apis.AddToScheme(scheme.Scheme)
+
+	var err error
+	if cfg, err = t.Start(); err != nil {
+		stdlog.Fatal(err)
+	}
+
+	code := m.Run()
+	t.Stop()
+	os.Exit(code)
+}
+
+// SetupTestReconcile returns a reconcile.Reconciler implementation that delegates to inner and
+// writes the request to requests after Reconcile is finished.
+func SetupTestReconcile(inner reconcile.Reconciler) (reconcile.Reconciler, chan reconcile.Request) {
+	requests := make(chan reconcile.Request)
+	fn := reconcile.Func(func(req reconcile.Request) (reconcile.Result, error) {
+		result, err := inner.Reconcile(req)
+		requests <- req
+		return result, err
+	})
+	return fn, requests
+}
+
+// StartTestManager starts the given manager in a goroutine, returning a stop channel and a WaitGroup to wait on during shutdown.
+func StartTestManager(mgr manager.Manager, g *gomega.GomegaWithT) (chan struct{}, *sync.WaitGroup) {
+	stop := make(chan struct{})
+	wg := &sync.WaitGroup{}
+	wg.Add(1)
+	go func() {
+		defer wg.Done()
+		g.Expect(mgr.Start(stop)).NotTo(gomega.HaveOccurred())
+	}()
+	return stop, wg
+}
diff --git a/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_test.go b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_test.go
new file mode 100644
index 0000000..34a0744
--- /dev/null
+++ b/pkg/controller/solrprometheusexporter/solrprometheusexporter_controller_test.go
@@ -0,0 +1,207 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package solrprometheusexporter
+
+import (
+	"testing"
+	"time"
+
+	solrv1beta1 "github.com/bloomberg/solr-operator/pkg/apis/solr/v1beta1"
+	"github.com/onsi/gomega"
+	"github.com/stretchr/testify/assert"
+	"golang.org/x/net/context"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+	"sigs.k8s.io/controller-runtime/pkg/client"
+	"sigs.k8s.io/controller-runtime/pkg/manager"
+	"sigs.k8s.io/controller-runtime/pkg/reconcile"
+)
+
+var c client.Client
+
+var expectedRequest = reconcile.Request{NamespacedName: types.NamespacedName{Name: "foo", Namespace: "default"}}
+var depKey = types.NamespacedName{Name: "foo-solr-metrics", Namespace: "default"}
+var serviceKey = types.NamespacedName{Name: "foo-solr-metrics", Namespace: "default"}
+var configMapKey = types.NamespacedName{Name: "foo-solr-metrics", Namespace: "default"}
+var additionalLabels = map[string]string{
+	"additional": "label",
+	"another":    "test",
+}
+
+const testExporterConfig = "This is a test config."
+
+const timeout = time.Second * 5
+
+func TestReconcileWithoutExporterConfig(t *testing.T) {
+	g := gomega.NewGomegaWithT(t)
+	instance := &solrv1beta1.SolrPrometheusExporter{ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "default"}}
+
+	// Setup the Manager and Controller.  Wrap the Controller Reconcile function so it writes each request to a
+	// channel when it is finished.
+	mgr, err := manager.New(cfg, manager.Options{})
+	g.Expect(err).NotTo(gomega.HaveOccurred())
+	c = mgr.GetClient()
+
+	recFn, requests := SetupTestReconcile(newReconciler(mgr))
+	g.Expect(add(mgr, recFn)).NotTo(gomega.HaveOccurred())
+
+	stopMgr, mgrStopped := StartTestManager(mgr, g)
+
+	defer func() {
+		close(stopMgr)
+		mgrStopped.Wait()
+	}()
+
+	// Create the SolrPrometheusExporter object and expect the Reconcile and Deployment to be created
+	err = c.Create(context.TODO(), instance)
+	// The create may fail with an invalid-object error if the instance is missing required fields;
+	// in that case, log the error and skip the rest of the test.
+	if apierrors.IsInvalid(err) {
+		t.Logf("failed to create object, got an invalid object error: %v", err)
+		return
+	}
+	g.Expect(err).NotTo(gomega.HaveOccurred())
+	defer c.Delete(context.TODO(), instance)
+	g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
+
+	expectNoConfigMap(g, requests, configMapKey)
+
+	foundDeployment := expectDeployment(t, g, requests, depKey, false)
+
+	expectService(t, g, requests, serviceKey, foundDeployment)
+}
+
+func TestReconcileWithExporterConfig(t *testing.T) {
+	g := gomega.NewGomegaWithT(t)
+	instance := &solrv1beta1.SolrPrometheusExporter{
+		ObjectMeta: metav1.ObjectMeta{Name: "foo", Namespace: "default"},
+		Spec: solrv1beta1.SolrPrometheusExporterSpec{
+			Config: testExporterConfig,
+		},
+	}
+
+	// Setup the Manager and Controller.  Wrap the Controller Reconcile function so it writes each request to a
+	// channel when it is finished.
+	mgr, err := manager.New(cfg, manager.Options{})
+	g.Expect(err).NotTo(gomega.HaveOccurred())
+	c = mgr.GetClient()
+
+	recFn, requests := SetupTestReconcile(newReconciler(mgr))
+	g.Expect(add(mgr, recFn)).NotTo(gomega.HaveOccurred())
+
+	stopMgr, mgrStopped := StartTestManager(mgr, g)
+
+	defer func() {
+		close(stopMgr)
+		mgrStopped.Wait()
+	}()
+
+	// Create the SolrPrometheusExporter object and expect the Reconcile and Deployment to be created
+	err = c.Create(context.TODO(), instance)
+	// The create may fail with an invalid-object error if the instance is missing required fields;
+	// in that case, log the error and skip the rest of the test.
+	if apierrors.IsInvalid(err) {
+		t.Logf("failed to create object, got an invalid object error: %v", err)
+		return
+	}
+	g.Expect(err).NotTo(gomega.HaveOccurred())
+	defer c.Delete(context.TODO(), instance)
+	g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
+
+	expectConfigMap(t, g, requests, configMapKey)
+
+	foundDeployment := expectDeployment(t, g, requests, depKey, true)
+
+	expectService(t, g, requests, serviceKey, foundDeployment)
+}
+
+func expectConfigMap(t *testing.T, g *gomega.GomegaWithT, requests chan reconcile.Request, configMapKey types.NamespacedName) {
+	configMap := &corev1.ConfigMap{}
+	g.Eventually(func() error { return c.Get(context.TODO(), configMapKey, configMap) }, timeout).
+		Should(gomega.Succeed())
+
+	// Verify the ConfigMap Specs
+	assert.Equal(t, testExporterConfig, configMap.Data["solr-prometheus-exporter.xml"], "Metrics ConfigMap does not have the correct data.")
+
+	// Delete the ConfigMap and expect Reconcile to be called for ConfigMap deletion
+	g.Expect(c.Delete(context.TODO(), configMap)).NotTo(gomega.HaveOccurred())
+	g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
+	g.Eventually(func() error { return c.Get(context.TODO(), configMapKey, configMap) }, timeout).
+		Should(gomega.Succeed())
+
+	// Manually delete ConfigMap since GC isn't enabled in the test control plane
+	g.Eventually(func() error { return c.Delete(context.TODO(), configMap) }, timeout).
+		Should(gomega.MatchError("configmaps \"" + configMapKey.Name + "\" not found"))
+}
+
+func expectNoConfigMap(g *gomega.GomegaWithT, requests chan reconcile.Request, configMapKey types.NamespacedName) {
+	configMap := &corev1.ConfigMap{}
+	g.Eventually(func() error { return c.Get(context.TODO(), configMapKey, configMap) }, timeout).
+		Should(gomega.MatchError("configmaps \"" + configMapKey.Name + "\" not found"))
+}
+
+func expectDeployment(t *testing.T, g *gomega.GomegaWithT, requests chan reconcile.Request, deploymentKey types.NamespacedName, usesConfig bool) *appsv1.Deployment {
+	deploy := &appsv1.Deployment{}
+	g.Eventually(func() error { return c.Get(context.TODO(), deploymentKey, deploy) }, timeout).
+		Should(gomega.Succeed())
+
+	// Verify the deployment Specs
+	assert.Equal(t, deploy.Spec.Template.Labels, deploy.Spec.Selector.MatchLabels, "Metrics Deployment has different Pod template labels and selector labels.")
+
+	if usesConfig {
+		if assert.Equal(t, 1, len(deploy.Spec.Template.Spec.Volumes), "Metrics Deployment should have exactly 1 volume, the exporter ConfigMap.") {
+			assert.Equal(t, configMapKey.Name, deploy.Spec.Template.Spec.Volumes[0].ConfigMap.Name, "Metrics Deployment volume should reference the exporter ConfigMap.")
+		}
+	} else {
+		assert.Equal(t, 0, len(deploy.Spec.Template.Spec.Volumes), "Metrics Deployment should have no volumes, since no ConfigMap is provided.")
+	}
+
+	// Delete the Deployment and expect Reconcile to be called for Deployment deletion
+	g.Expect(c.Delete(context.TODO(), deploy)).NotTo(gomega.HaveOccurred())
+	g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
+	g.Eventually(func() error { return c.Get(context.TODO(), deploymentKey, deploy) }, timeout).
+		Should(gomega.Succeed())
+
+	// Manually delete Deployment since GC isn't enabled in the test control plane
+	g.Eventually(func() error { return c.Delete(context.TODO(), deploy) }, timeout).
+		Should(gomega.MatchError("deployments.apps \"" + deploymentKey.Name + "\" not found"))
+
+	return deploy
+}
+
+func expectService(t *testing.T, g *gomega.GomegaWithT, requests chan reconcile.Request, serviceKey types.NamespacedName, foundDeployment *appsv1.Deployment) {
+	service := &corev1.Service{}
+	g.Eventually(func() error { return c.Get(context.TODO(), serviceKey, service) }, timeout).
+		Should(gomega.Succeed())
+
+	// Verify the Service specs
+	assert.Equal(t, "true", service.Annotations["prometheus.io/scrape"], "Metrics Service Prometheus scraping is not enabled.")
+	assert.Equal(t, foundDeployment.Spec.Template.Labels, service.Spec.Selector, "Metrics Service is not pointing to the correct Pods.")
+
+	// Delete the Service and expect Reconcile to be called for Service deletion
+	g.Expect(c.Delete(context.TODO(), service)).NotTo(gomega.HaveOccurred())
+	g.Eventually(requests, timeout).Should(gomega.Receive(gomega.Equal(expectedRequest)))
+	g.Eventually(func() error { return c.Get(context.TODO(), serviceKey, service) }, timeout).
+		Should(gomega.Succeed())
+
+	// Manually delete Service since GC isn't enabled in the test control plane
+	g.Eventually(func() error { return c.Delete(context.TODO(), service) }, timeout).
+		Should(gomega.MatchError("services \"" + serviceKey.Name + "\" not found"))
+}
diff --git a/pkg/controller/util/prometheus_exporter_util.go b/pkg/controller/util/prometheus_exporter_util.go
new file mode 100644
index 0000000..899985c
--- /dev/null
+++ b/pkg/controller/util/prometheus_exporter_util.go
@@ -0,0 +1,260 @@
+/*
+Copyright 2019 Bloomberg Finance LP.
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+    http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package util
+
+import (
+	solr "github.com/bloomberg/solr-operator/pkg/apis/solr/v1beta1"
+	appsv1 "k8s.io/api/apps/v1"
+	corev1 "k8s.io/api/core/v1"
+	extv1 "k8s.io/api/extensions/v1beta1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+	"reflect"
+	"strconv"
+)
+
+const (
+	SolrMetricsPort        = 8080
+	SolrMetricsPortName    = "solr-metrics"
+	ExtSolrMetricsPort     = 80
+	ExtSolrMetricsPortName = "ext-solr-metrics"
+
+	DefaultPrometheusExporterEntrypoint = "/opt/solr/contrib/prometheus-exporter/bin/solr-exporter"
+)
+
+// SolrConnectionInfo defines how to connect to a SolrCloud or standalone Solr instance.
+// One, and only one, of Cloud or Standalone must be provided.
+type SolrConnectionInfo struct {
+	CloudZkConnnectionString string
+	StandaloneAddress        string
+}
+
+// GenerateSolrPrometheusExporterDeployment returns a new appsv1.Deployment pointer generated for the SolrPrometheusExporter instance
+// solrPrometheusExporter: SolrPrometheusExporter instance
+func GenerateSolrPrometheusExporterDeployment(solrPrometheusExporter *solr.SolrPrometheusExporter, solrConnectionInfo SolrConnectionInfo) *appsv1.Deployment {
+	gracePeriodTerm := int64(10)
+	singleReplica := int32(1)
+	fsGroup := int64(SolrMetricsPort)
+
+	labels := solrPrometheusExporter.SharedLabelsWith(solrPrometheusExporter.GetLabels())
+	selectorLabels := solrPrometheusExporter.SharedLabels()
+
+	labels["technology"] = solr.SolrPrometheusExporterTechnologyLabel
+	selectorLabels["technology"] = solr.SolrPrometheusExporterTechnologyLabel
+
+	var solrVolumes []corev1.Volume
+	var volumeMounts []corev1.VolumeMount
+	exporterArgs := []string{
+		"-p", strconv.Itoa(SolrMetricsPort),
+		"-n", strconv.Itoa(int(solrPrometheusExporter.Spec.NumThreads)),
+	}
+
+	if solrPrometheusExporter.Spec.ScrapeInterval > 0 {
+		exporterArgs = append(exporterArgs, "-s", strconv.Itoa(int(solrPrometheusExporter.Spec.ScrapeInterval)))
+	}
+
+	// Setup the solrConnectionInfo
+	if solrConnectionInfo.CloudZkConnnectionString != "" {
+		exporterArgs = append(exporterArgs, "-z", solrConnectionInfo.CloudZkConnnectionString)
+	} else if solrConnectionInfo.StandaloneAddress != "" {
+		exporterArgs = append(exporterArgs, "-b", solrConnectionInfo.StandaloneAddress)
+	}
+
+	// Only add the config if it is passed in from the user. Otherwise, use the default.
+	if solrPrometheusExporter.Spec.Config != "" {
+		solrVolumes = []corev1.Volume{{
+			Name: "solr-prometheus-exporter-xml",
+			VolumeSource: corev1.VolumeSource{
+				ConfigMap: &corev1.ConfigMapVolumeSource{
+					LocalObjectReference: corev1.LocalObjectReference{
+						Name: solrPrometheusExporter.MetricsConfigMapName(),
+					},
+					Items: []corev1.KeyToPath{
+						{
+							Key:  "solr-prometheus-exporter.xml",
+							Path: "solr-prometheus-exporter.xml",
+						},
+					},
+				},
+			},
+		}}
+
+		volumeMounts = []corev1.VolumeMount{{Name: "solr-prometheus-exporter-xml", MountPath: "/opt/solr-exporter", ReadOnly: true}}
+
+		exporterArgs = append(exporterArgs, "-f", "/opt/solr-exporter/solr-prometheus-exporter.xml")
+	} else {
+		exporterArgs = append(exporterArgs, "-f", "/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml")
+	}
+
+	entrypoint := DefaultPrometheusExporterEntrypoint
+	if solrPrometheusExporter.Spec.ExporterEntrypoint != "" {
+		entrypoint = solrPrometheusExporter.Spec.ExporterEntrypoint
+	}
+
+	deployment := &appsv1.Deployment{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      solrPrometheusExporter.MetricsDeploymentName(),
+			Namespace: solrPrometheusExporter.GetNamespace(),
+			Labels:    labels,
+		},
+		Spec: appsv1.DeploymentSpec{
+			Selector: &metav1.LabelSelector{
+				MatchLabels: selectorLabels,
+			},
+			Replicas: &singleReplica,
+			Template: corev1.PodTemplateSpec{
+				ObjectMeta: metav1.ObjectMeta{
+					Labels: labels,
+				},
+				Spec: corev1.PodSpec{
+					TerminationGracePeriodSeconds: &gracePeriodTerm,
+					SecurityContext: &corev1.PodSecurityContext{
+						FSGroup: &fsGroup,
+					},
+					Volumes: solrVolumes,
+					Containers: []corev1.Container{
+						{
+							Name:            "solr-prometheus-exporter",
+							Image:           solrPrometheusExporter.Spec.Image.ToImageName(),
+							ImagePullPolicy: solrPrometheusExporter.Spec.Image.PullPolicy,
+							Ports:           []corev1.ContainerPort{{ContainerPort: SolrMetricsPort, Name: SolrMetricsPortName}},
+							VolumeMounts:    volumeMounts,
+							Command:         []string{entrypoint},
+							Args:            exporterArgs,
+
+							LivenessProbe: &corev1.Probe{
+								InitialDelaySeconds: 20,
+								PeriodSeconds:       10,
+								Handler: corev1.Handler{
+									HTTPGet: &corev1.HTTPGetAction{
+										Scheme: corev1.URISchemeHTTP,
+										Path:   "/metrics",
+										Port:   intstr.FromInt(SolrMetricsPort),
+									},
+								},
+							},
+						},
+					},
+				},
+			},
+		},
+	}
+	return deployment
+}
+
+// GenerateMetricsConfigMap returns a new corev1.ConfigMap pointer generated for the SolrPrometheusExporter instance's solr-prometheus-exporter.xml
+// solrPrometheusExporter: SolrPrometheusExporter instance
+func GenerateMetricsConfigMap(solrPrometheusExporter *solr.SolrPrometheusExporter) *corev1.ConfigMap {
+	labels := solrPrometheusExporter.SharedLabelsWith(solrPrometheusExporter.GetLabels())
+
+	configMap := &corev1.ConfigMap{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      solrPrometheusExporter.MetricsConfigMapName(),
+			Namespace: solrPrometheusExporter.GetNamespace(),
+			Labels:    labels,
+		},
+		Data: map[string]string{
+			"solr-prometheus-exporter.xml": solrPrometheusExporter.Spec.Config,
+		},
+	}
+	return configMap
+}
+
+// CopyMetricsConfigMapFields copies the owned fields from one ConfigMap to another
+func CopyMetricsConfigMapFields(from, to *corev1.ConfigMap) bool {
+	requireUpdate := false
+	for k, v := range from.Labels {
+		if to.Labels[k] != v {
+			requireUpdate = true
+		}
+		to.Labels[k] = v
+	}
+
+	for k, v := range from.Annotations {
+		if to.Annotations[k] != v {
+			requireUpdate = true
+		}
+		to.Annotations[k] = v
+	}
+
+	// ConfigMaps have no Spec; sync the Data map wholesale, since the operator owns the exporter config
+
+	if !reflect.DeepEqual(to.Data, from.Data) {
+		requireUpdate = true
+	}
+	to.Data = from.Data
+
+	return requireUpdate
+}
+
+// GenerateSolrMetricsService returns a new corev1.Service pointer generated for the SolrPrometheusExporter deployment
+// Metrics will be collected on this service endpoint, as we don't want to double-count data if multiple exporters are running.
+// solrPrometheusExporter: SolrPrometheusExporter instance
+func GenerateSolrMetricsService(solrPrometheusExporter *solr.SolrPrometheusExporter) *corev1.Service {
+	copyLabels := solrPrometheusExporter.GetLabels()
+	if copyLabels == nil {
+		copyLabels = map[string]string{}
+	}
+	labels := solrPrometheusExporter.SharedLabelsWith(copyLabels)
+	labels["service-type"] = "metrics"
+
+	selectorLabels := solrPrometheusExporter.SharedLabels()
+	selectorLabels["technology"] = solr.SolrPrometheusExporterTechnologyLabel
+
+	service := &corev1.Service{
+		ObjectMeta: metav1.ObjectMeta{
+			Name:      solrPrometheusExporter.MetricsServiceName(),
+			Namespace: solrPrometheusExporter.GetNamespace(),
+			Labels:    labels,
+			Annotations: map[string]string{
+				"prometheus.io/scrape": "true",
+				"prometheus.io/scheme": "http",
+				"prometheus.io/path":   "/metrics",
+				"prometheus.io/port":   strconv.Itoa(ExtSolrMetricsPort),
+			},
+		},
+		Spec: corev1.ServiceSpec{
+			Ports: []corev1.ServicePort{
+				{Name: ExtSolrMetricsPortName, Port: ExtSolrMetricsPort, Protocol: corev1.ProtocolTCP, TargetPort: intstr.FromInt(SolrMetricsPort)},
+			},
+			Selector: selectorLabels,
+		},
+	}
+	return service
+}
+
+// CreateMetricsIngressRule returns a new Ingress Rule generated for the solr metrics endpoint
+// This is not currently used, as an ingress is not created for the metrics endpoint.
+
+// solrPrometheusExporter: SolrPrometheusExporter instance
+// ingressBaseDomain: string base domain for the ingress controller
+func CreateMetricsIngressRule(solrPrometheusExporter *solr.SolrPrometheusExporter, ingressBaseDomain string) extv1.IngressRule {
+	externalAddress := solrPrometheusExporter.MetricsIngressUrl(ingressBaseDomain)
+	return extv1.IngressRule{
+		Host: externalAddress,
+		IngressRuleValue: extv1.IngressRuleValue{
+			HTTP: &extv1.HTTPIngressRuleValue{
+				Paths: []extv1.HTTPIngressPath{
+					{
+						Backend: extv1.IngressBackend{
+							ServiceName: solrPrometheusExporter.MetricsServiceName(),
+							ServicePort: intstr.FromInt(ExtSolrMetricsPort),
+						},
+					},
+				},
+			},
+		},
+	}
+}
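The exporter deployment above assembles its command-line arguments incrementally: port and thread count are always set, the scrape interval only when positive, and exactly one of `-z` (ZooKeeper) or `-b` (standalone base URL) depending on which connection field is populated. A minimal, standalone sketch of that flag-assembly logic (`connInfo` and `buildExporterArgs` are illustrative names, not part of the operator):

```go
package main

import (
	"fmt"
	"strconv"
)

// connInfo mirrors SolrConnectionInfo for illustration: exactly one
// of the two fields should be set.
type connInfo struct {
	ZkConnectionString string
	StandaloneAddress  string
}

// buildExporterArgs sketches the argument construction in
// GenerateSolrPrometheusExporterDeployment.
func buildExporterArgs(info connInfo, port, numThreads, scrapeInterval int) []string {
	args := []string{
		"-p", strconv.Itoa(port),
		"-n", strconv.Itoa(numThreads),
	}
	// Scrape interval is optional; only forwarded when set.
	if scrapeInterval > 0 {
		args = append(args, "-s", strconv.Itoa(scrapeInterval))
	}
	// Cloud mode takes precedence; otherwise fall back to standalone.
	if info.ZkConnectionString != "" {
		args = append(args, "-z", info.ZkConnectionString)
	} else if info.StandaloneAddress != "" {
		args = append(args, "-b", info.StandaloneAddress)
	}
	return args
}

func main() {
	args := buildExporterArgs(connInfo{ZkConnectionString: "zk:2181/solr"}, 8080, 4, 0)
	fmt.Println(args) // prints "[-p 8080 -n 4 -z zk:2181/solr]"
}
```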
diff --git a/pkg/controller/util/solr_util.go b/pkg/controller/util/solr_util.go
index 7ddd594..37f017a 100644
--- a/pkg/controller/util/solr_util.go
+++ b/pkg/controller/util/solr_util.go
@@ -77,7 +77,7 @@
 
 	solrDataVolumeName := "data"
 	volumeMounts := []corev1.VolumeMount{{Name: solrDataVolumeName, MountPath: "/var/solr/data"}}
-	pvcs := []corev1.PersistentVolumeClaim(nil)
+	var pvcs []corev1.PersistentVolumeClaim
 	if solrCloud.Spec.DataPvcSpec != nil {
 		pvcs = []corev1.PersistentVolumeClaim{
 			{
diff --git a/pkg/controller/util/zk_util.go b/pkg/controller/util/zk_util.go
index 1c3febf..8f6c175 100644
--- a/pkg/controller/util/zk_util.go
+++ b/pkg/controller/util/zk_util.go
@@ -205,19 +205,19 @@
 // Returns true if the fields copied from don't match to.
 func CopyDeploymentFields(from, to *appsv1.Deployment) bool {
 	requireUpdate := false
-	for k, v := range to.Labels {
-		if from.Labels[k] != v {
+	for k, v := range from.Labels {
+		if to.Labels[k] != v {
 			requireUpdate = true
 		}
+		to.Labels[k] = v
 	}
-	to.Labels = from.Labels
 
-	for k, v := range to.Annotations {
-		if from.Annotations[k] != v {
+	for k, v := range from.Annotations {
+		if to.Annotations[k] != v {
 			requireUpdate = true
 		}
+		to.Annotations[k] = v
 	}
-	to.Annotations = from.Annotations
 
 	if !reflect.DeepEqual(to.Spec.Replicas, from.Spec.Replicas) {
 		requireUpdate = true
@@ -229,9 +229,51 @@
 		to.Spec.Selector = from.Spec.Selector
 	}
 
-	if !reflect.DeepEqual(to.Spec.Template, from.Spec.Template) {
+	if !reflect.DeepEqual(to.Spec.Template.Labels, from.Spec.Template.Labels) {
 		requireUpdate = true
-		to.Spec.Template = from.Spec.Template
+		to.Spec.Template.Labels = from.Spec.Template.Labels
+	}
+
+	if !reflect.DeepEqual(to.Spec.Template.Spec.Volumes, from.Spec.Template.Spec.Volumes) {
+		requireUpdate = true
+		to.Spec.Template.Spec.Volumes = from.Spec.Template.Spec.Volumes
+	}
+
+	if len(to.Spec.Template.Spec.Containers) != len(from.Spec.Template.Spec.Containers) {
+		requireUpdate = true
+		to.Spec.Template.Spec.Containers = from.Spec.Template.Spec.Containers
+	} else if !reflect.DeepEqual(to.Spec.Template.Spec.Containers, from.Spec.Template.Spec.Containers) {
+		for i := 0; i < len(to.Spec.Template.Spec.Containers); i++ {
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].Name, from.Spec.Template.Spec.Containers[i].Name) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].Name = from.Spec.Template.Spec.Containers[i].Name
+			}
+
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].Image, from.Spec.Template.Spec.Containers[i].Image) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].Image = from.Spec.Template.Spec.Containers[i].Image
+			}
+
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].ImagePullPolicy, from.Spec.Template.Spec.Containers[i].ImagePullPolicy) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].ImagePullPolicy = from.Spec.Template.Spec.Containers[i].ImagePullPolicy
+			}
+
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].Command, from.Spec.Template.Spec.Containers[i].Command) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].Command = from.Spec.Template.Spec.Containers[i].Command
+			}
+
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].Args, from.Spec.Template.Spec.Containers[i].Args) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].Args = from.Spec.Template.Spec.Containers[i].Args
+			}
+
+			if !reflect.DeepEqual(to.Spec.Template.Spec.Containers[i].Env, from.Spec.Template.Spec.Containers[i].Env) {
+				requireUpdate = true
+				to.Spec.Template.Spec.Containers[i].Env = from.Spec.Template.Spec.Containers[i].Env
+			}
+		}
 	}
 
 	return requireUpdate
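The rewritten `CopyDeploymentFields` (and the new `CopyMetricsConfigMapFields`) merge labels and annotations key-by-key instead of replacing the whole map, so extra keys added by the cluster survive, and they report whether anything actually changed so the controller can skip no-op updates. A minimal sketch of that merge semantic, under the assumption that `to` is a non-nil map (as it is for objects read back from the API server):

```go
package main

import "fmt"

// copyLabels mirrors the merge used by CopyDeploymentFields: every
// key on `from` is copied onto `to`, keys unique to `to` are left
// alone, and the return value reports whether anything changed.
func copyLabels(from, to map[string]string) bool {
	requireUpdate := false
	for k, v := range from {
		if to[k] != v {
			requireUpdate = true
		}
		to[k] = v
	}
	return requireUpdate
}

func main() {
	to := map[string]string{"app": "solr", "extra": "kept"}
	from := map[string]string{"app": "solr", "technology": "solr-prometheus-exporter"}
	changed := copyLabels(from, to)
	// Keys unique to `to` survive the merge; the new key marks it changed.
	fmt.Println(changed, to["extra"]) // prints "true kept"
}
```

Repeating the merge with the same inputs returns false, which is what lets the reconcile loop avoid issuing redundant Update calls.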