
Slicer gives values outside of prior #180

Open
alebyk113 opened this issue Aug 21, 2018 · 4 comments

Comments

@alebyk113

I am currently having a similar issue to issue #43. I am using Slicer sampling for parameters with Uniform and DiscreteUniform distributions:

parameter1 = pm.Uniform('parameter1',0.01,1)
parameter2 = pm.Uniform('parameter2',0,2)
parameter3 = pm.DiscreteUniform('parameter3',1,10)
parameter4 = pm.Uniform('parameter4',0,1.75)
parameter5 = pm.Uniform('parameter5', 0.005, 0.25)
parameter6 = pm.Uniform('parameter6', 0.005, 0.15)

@pm.potential
def log_l(experiment=experiment, parameter1=parameter1, parameter2=parameter2,
          parameter3=parameter3, parameter4=parameter4, parameter5=parameter5,
          parameter6=parameter6):
    parameters = [parameter1, parameter2, parameter3]
    log_l = calculate_probability(parameters, t_m, tol, n_integration,
                                  parameter4, parameter5, parameter6,
                                  experiment.decon_all[2, :, :])
    return log_l
model = pm.MCMC([parameter1,parameter2,parameter3,parameter4,parameter5,parameter6,log_l])
model.use_step_method(pm.Slicer, parameter1, w=0.05)
model.use_step_method(pm.Slicer, parameter2, w=0.05)
model.use_step_method(pm.Slicer, parameter3, w=1)
model.use_step_method(pm.Slicer, parameter4, w=0.1)
model.use_step_method(pm.Slicer, parameter5, w=0.025)
model.use_step_method(pm.Slicer, parameter6, w=0.025)
model.sample(100)

Am I doing something wrong here, or is this a real issue?
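For anyone hitting the same symptom, a quick way to confirm the sampler really left the prior is to compare the saved trace against the prior bounds after sampling. The trace values below are made up for illustration:

```python
import numpy as np

# Hypothetical trace for parameter1, whose prior is Uniform(0.01, 1)
trace = np.array([0.52, 0.013, 1.30, 0.90, -0.02])
lower, upper = 0.01, 1.0

# Any entries here were sampled outside the prior's support
outside = trace[(trace < lower) | (trace > upper)]
print(outside)  # [ 1.3  -0.02]
```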

@fonnesbeck
Member

fonnesbeck commented Aug 21, 2018

I'm not sure what your factor potential is for (as originally posted, it also generated a syntax error), and I don't know what calculate_probability is. Also, please ensure that your starting values are within the support of all the corresponding variables.

Have you tried this in PyMC3? It also has a slice sampler, and we are encouraging users to move to it instead of continuing with PyMC2, unless there is an overriding reason not to.
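As background on why out-of-prior values indicate a real problem: a slice sampler that evaluates the joint log-probability should never accept points outside a bounded prior, because the log-density is -inf there. Here is a minimal numpy-only sketch of Neal's stepping-out slice sampler against a Uniform(0, 1) target; all names are illustrative, not PyMC internals:

```python
import numpy as np

rng = np.random.default_rng(0)

def logp(x, lower=0.0, upper=1.0):
    # Log-density of Uniform(lower, upper) up to a constant:
    # -inf outside the support is what keeps samples inside the prior.
    if x < lower or x > upper:
        return -np.inf
    return 0.0

def slice_sample(x0, w, n):
    """Univariate slice sampling with stepping out (Neal, 2003)."""
    xs = []
    x = x0
    for _ in range(n):
        # Draw the vertical slice level under the density at x
        log_y = logp(x) + np.log(rng.random())
        # Step out with width w until the bracket leaves the slice
        left = x - w * rng.random()
        right = left + w
        while logp(left) > log_y:
            left -= w
        while logp(right) > log_y:
            right += w
        # Shrink the bracket until a point inside the slice is drawn
        while True:
            x_new = rng.uniform(left, right)
            if logp(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        xs.append(x)
    return np.array(xs)

draws = slice_sample(0.5, w=0.05, n=500)
print(draws.min() >= 0.0 and draws.max() <= 1.0)  # True
```

If a sampler produces values outside the bounds despite this, the usual suspects are a starting value outside the support or a likelihood that fails to return -inf (or raise) out of bounds.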

@alebyk113
Author

Thanks for your reply, and sorry, I should have described this in the initial post. calculate_probability is a custom function that takes the observed data, contained in the experiment object, and returns the log-likelihood given the parameter values. I added the @potential decorator because I thought that was how custom likelihood functions should be handled; is that not the best approach? Could this be why the sampling generates values outside of the prior?

I initially tried setting this up in PyMC3, but I had issues converting the Theano objects into floats and integers to pass to the calculate_probability function. That function is written with NumPy and SciPy; it's quite a big program and would take a long time to rewrite to handle Theano objects.

@fonnesbeck
Member

It's possible, yes.

If you do want to give PyMC3 another go, you can write non-Theano functions and add them to models using the as_op decorator (theano.compile.ops.as_op). This does not allow gradients to be calculated, but if you are using slice sampling or Metropolis, that's not an issue.

@alebyk113
Author

I had a go at using as_op; this is what I ended up with:

@t.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.lscalar, t.dscalar, t.dscalar, t.dscalar, t.dmatrix],
                     otypes=[t.dscalar])
def test(parameter1, parameter2, parameter3, parameter4, parameter5, parameter6, data):
    parameters = [parameter1, parameter2, parameter3]
    log_l = calculate_probability(parameters, np.diff(experiment.spike_t), 0.001, 100,
                                  parameter4, parameter5, parameter6, data)
    return log_l

with pm.Model() as model:
    parameter1 = pm.Uniform('parameter1', 0.01, 1.)
    parameter2 = pm.Uniform('parameter2', 0., 2.)
    parameter3 = pm.DiscreteUniform('parameter3', 0., 20.)
    parameter4 = pm.Uniform('parameter4', 0., 1.75)
    parameter5 = pm.Uniform('parameter5', 0.005, 0.25)
    parameter6 = pm.Uniform('parameter6', 0.005, 0.15)

    def logp(data):
        return test(parameter1, parameter2, parameter3, parameter4, parameter5, parameter6, data)

    like = pm.DensityDist('like', logp, observed=data)
    trace = pm.sample(100, pm.Slice())

And the program raises ValueError: expected an ndarray. I have no clue what it is referring to, or whether I have built the model properly. Do you see any glaring mistakes in this set-up?
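The thread does not resolve this, but one plausible cause of "expected an ndarray" with as_op is the return type: declaring otypes=[t.dscalar] means the op must return a 0-d NumPy array, while a NumPy/SciPy likelihood typically hands back a plain Python float. A numpy-only sketch of the distinction, with a hypothetical stub standing in for calculate_probability:

```python
import numpy as np

def calculate_probability_stub(x):
    # Hypothetical stand-in for the user's numpy/scipy likelihood,
    # which returns a plain Python float.
    return float(-0.5 * x * x)

# A plain float is NOT an ndarray, which is what the op would reject:
raw = calculate_probability_stub(1.0)
print(isinstance(raw, np.ndarray))  # False

# Wrapping the result in np.array() produces the 0-d float64 ndarray
# that a dscalar output type expects:
wrapped = np.array(calculate_probability_stub(1.0))
print(isinstance(wrapped, np.ndarray), wrapped.ndim)  # True 0
```

Under that assumption, changing the last line of the op to `return np.array(log_l)` would be the fix to try.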
