Search Results

Search found 25570 results on 1023 pages for 'low level api'.


  • Trouble installing php memcache extension

    - by 2020vert
    I'm trying to install memcache on MAMP but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP but phpinfo() doesn't list the memcache extension.

        $ ./pecl install memcache
        downloading memcache-2.2.5.tgz ...
        Starting to download memcache-2.2.5.tgz (35,981 bytes)
        ..........done: 35,981 bytes
        11 source files, building
        WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match
        running: phpize
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519
        Enable memcache session handler support? [yes] : yes
        ...
        Build process completed successfully
        Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so'
        install ok: channel://pecl.php.net/memcache-2.2.5
        configuration option "php_ini" is not set to php.ini location
        You should add "extension=memcache.so" to php.ini
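    The closing notice is the clue: pecl built the extension against MAMP's PHP, but the php.ini it tells you to edit may not be the one MAMP actually loads. A sketch of how to line the two up; the MAMP php.ini path below is an assumption, so check yours against the "Loaded Configuration File" row in phpinfo():

        # confirm which php.ini MAMP's PHP actually reads (path is an assumption)
        /Applications/MAMP/bin/php5/bin/php -i | grep "Loaded Configuration File"

        # point pecl at that file so installs update the right ini
        ./pecl config-set php_ini /Applications/MAMP/conf/php5/php.ini

    Then make sure extension=memcache.so is in that file, not in some other copy, and restart MAMP.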

    Read the article

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which has some APIs we expose to developers, in the format

        /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM

    In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
          if (req.url ~ "^/api/.+\.xml") {
            unset req.http.cookie;
          }
        }

    We just strip cookies out and let the backend do the rest as far as times are concerned (this is a hackaround, since Rails/authlogic sends session cookies with API responses). At present, though, distinct developers are basically hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
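    Since key never affects the response, one common approach is to strip it from req.url in vcl_recv, before Varnish hashes the request, so every developer's variant collapses onto one cache object. A sketch, assuming VCL syntax of the same vintage as the snippet above:

        sub vcl_recv {
          if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
            unset req.http.cookie;
            # drop the tracking key before the URL is hashed; the second
            # regsub covers the rarer case where key is the first parameter
            set req.url = regsuball(req.url, "&key=[^&]*", "");
            set req.url = regsub(req.url, "\?key=[^&]*&", "?");
          }
        }

    One caveat: the backend then receives the stripped URL on cache misses, so any key-based tracking would have to happen before the rewrite (e.g. log req.url at the top of vcl_recv) or move into a request header that is logged but not hashed.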

    Read the article

  • Multi-Domain Root Administrator

    - by Brent Pabst
    We have a new domain structure we are planning on rolling out in the next few months. Essentially there is a single top-level forest root domain, "mydomain.lan", and two children, "us.mydomain.lan" and "pl.mydomain.lan". We want to configure an administrator account or two at the top-level domain that then have full administrator permissions on the subdomains. By default the top-level administrator cannot access or log in to machines on the subdomains. Running W2K8R2. Ideas?
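    For what it's worth, the forest root's Enterprise Admins group is made a member of the built-in Administrators group of every domain in the forest by default, so the usual route is to put the chosen accounts in it. A minimal sketch using the AD PowerShell module that ships with W2K8R2 (the account name is a placeholder):

        # run on (or remoted to) a forest root DC
        Import-Module ActiveDirectory
        Add-ADGroupMember -Identity "Enterprise Admins" -Members "top.admin"

    If full enterprise rights are broader than wanted, the alternative is to add a root-domain group to each child domain's Administrators group instead, e.g. via Restricted Groups in a GPO linked in each child.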

    Read the article

  • How to recursively move all files (including hidden) in a subfolder into a parent folder in *nix?

    - by deadprogrammer
    This is a bit of an embarrassing question, but I have to admit that this late in my career I still have questions about the mv command. I frequently have this problem: I need to move all files recursively up one level. Let's say I have folder foo, and a folder bar inside it. bar has a mess of files and folders, including dot files and folders. How do I move everything in bar to the foo level? If foo is empty, I simply move bar one level above, delete foo, and rename bar to foo. Part of the problem is that I can't figure out what mv's wildcard for "everything including dots" is. A part of this question is this: is there an in-depth discussion somewhere of the wildcards that the cp and mv commands use? (Googling this only brings up very basic tutorials.)
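    One clarification that usually unblocks this: the wildcard expansion is done by the shell, not by mv, and by default the shell's * skips dotfiles. A sketch of two common ways around that, using the foo/bar names from the question:

        # bash: make * match dotfiles too, for this shell session
        shopt -s dotglob
        mv foo/bar/* foo/
        rmdir foo/bar

        # POSIX shells: name the dotfiles explicitly, avoiding . and ..
        mv foo/bar/* foo/bar/.[!.]* foo/bar/..?* foo/

    The second form may complain about patterns that match nothing; that is harmless here.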

    Read the article

  • How to install pecl uploadprogress on Debian Lenny

    - by kidrobot
    I am getting this output/error for

        # pecl install uploadprogress
        downloading uploadprogress-1.0.1.tgz ...
        Starting to download uploadprogress-1.0.1.tgz (8,536 bytes)
        .....done: 8,536 bytes
        4 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519
        building in /var/tmp/pear-build-root/uploadprogress-1.0.1
        running: /tmp/pear/temp/uploadprogress/configure
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for gcc... no
        checking for cc... no
        checking for cl.exe... no
        configure: error: no acceptable C compiler found in $PATH
        See `config.log' for more details.
        ERROR: `/tmp/pear/temp/uploadprogress/configure' failed

    php-pear is installed. I'm stumped.
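    The configure transcript says it outright: there is no C compiler on the box, and pecl builds extensions from source. phpize already ran, so the PHP headers appear to be in place; what's missing is gcc. On Lenny the usual fix is (package names assume stock Debian repositories):

        apt-get install build-essential
        pecl install uploadprogress

    If a later step then complains about missing PHP headers, php5-dev is the package that provides them.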

    Read the article

  • Cannot install Pecl (Imagick) extension on Centos server - autoconf missing

    - by Stevo
    I'm trying to install the pecl extension Imagick on a CentOS server, but I'm getting an error about autoconf. Autoconf is installed, as are make and gcc, but it's complaining about the path:

        [root@server ~]# pecl install imagick
        downloading imagick-3.0.1.tgz ...
        Starting to download imagick-3.0.1.tgz (93,920 bytes)
        .....................done: 93,920 bytes
        13 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        /usr/bin/phpize: /var/tmp/imagick/build/shtool: /bin/sh: bad interpreter: Permission denied
        Cannot find autoconf. Please check your autoconf installation and the
        $PHP_AUTOCONF environment variable. Then, rerun this script.
        ERROR: `phpize' failed

    What should I do?
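    The "bad interpreter: Permission denied" line just above the autoconf complaint is the real failure: the build script under /var/tmp cannot be executed, which on hardened CentOS boxes typically means /var/tmp (or /tmp) is mounted noexec. A sketch of the check and two possible ways out (directory names are assumptions):

        mount | grep -E ' /(var/)?tmp '      # look for the noexec flag

        # option 1: temporarily allow execution while building
        mount -o remount,exec /var/tmp

        # option 2: point PEAR's build directory somewhere executable
        mkdir -p /root/tmp
        pecl config-set temp_dir /root/tmp
        pecl install imagick

    If the remount route is used, remounting with noexec afterwards restores the original hardening.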

    Read the article

  • Scrolling mouse sets windows sound volume

    - by Ikke
    Suddenly, when I use the scroll wheel of my mouse, it changes the Windows sound volume level. I have an HP DV6-2030SD laptop with Windows 7 64-bit and a Zolid p50622 mouse. When I use the scroll function of the laptop's touchpad, it does not adjust the sound level. I don't have any special mouse drivers installed, just the standard Windows drivers. When I scroll, I see the HP sound level overlay. It doesn't always do it, but when it does, it prevents the current window from scrolling, which is really annoying. Rebooting doesn't help. I've tried putting the USB dongle in a different port, but this doesn't help either. Any advice on how I can fix this?

    Read the article

  • MySQL Cluster Failover doesn't work

    - by Lukasz
    I have two servers. On the first server, 10.100.15.150:

        1. one mgm server
        2. one ndbd
        3. one mysql api

    On the second server, 10.100.15.160:

        1. one ndbd
        2. one mysql api

    When I start all parts of the cluster, it looks like this:

        Cluster Configuration
        [ndbd(NDB)]     2 node(s)
        id=21   @10.100.15.150  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0)
        id=22   @10.100.15.160  (mysql-5.1.56 ndb-7.1.17, Nodegroup: 0, Master)

        [ndb_mgmd(MGM)] 1 node(s)
        id=3    @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)

        [mysqld(API)]   2 node(s)
        id=11   @10.100.15.150  (mysql-5.1.56 ndb-7.1.17)
        id=12   @10.100.15.160  (mysql-5.1.56 ndb-7.1.17)

    When I shut down the first machine, 10.100.15.150, the ndbd process on the second machine also shuts down, so I cannot use that data node and the cluster fails. How must I configure this cluster to get failover working? Thx
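    What is likely happening: the lone ndb_mgmd on .150 also acts as the cluster's arbitrator, so when that machine dies, the surviving data node loses its nodegroup partner and the arbitrator at the same moment and shuts itself down rather than risk split-brain. The usual cure is an arbitrator that can outlive either data node, e.g. a management node on a third host. A config.ini sketch; the third host's address is a placeholder:

        [ndb_mgmd]
        NodeId=1
        HostName=10.100.15.150

        [ndb_mgmd]
        NodeId=2
        HostName=10.100.15.170   # third machine (assumption)

        [ndbd]
        NodeId=21
        HostName=10.100.15.150

        [ndbd]
        NodeId=22
        HostName=10.100.15.160

    With only two hosts in total, no configuration makes losing the arbitrator's host survivable; some piece of the cluster has to live elsewhere.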

    Read the article

  • What happens if I run caspol.exe multiple times?

    - by Maclovin
    Hi there! Caspol.exe is used to modify security policy at the machine policy level, the user policy level, and the enterprise policy level. What I use it for is setting up a trust between the client and an area on some server. I went through the scripts on the server and found an interesting script that sets up full trust via caspol between a client in one zone and an application on the server. That script has been running every day, for every logon, since it was implemented. Can someone tell me the consequences? I guess there are about 500 trusts between the client computer and the server, all of which point to the same thing.
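    Listing the policy is the quickest way to see whether those daily runs actually piled up duplicate code groups, and a reset returns you to a known state. A sketch, run from an elevated prompt; -m targets the machine level the script presumably wrote to:

        rem show the machine-level code group tree; duplicates will be visible here
        caspol -m -listgroups

        rem if the tree is full of repeats, restore the default machine policy
        caspol -m -reset

    Note that -reset removes every custom entry, so the one trust that is genuinely needed has to be re-added afterwards.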

    Read the article

  • Network monitoring library, or objects, for a cloud

    - by Andrew Smith
    I am looking for a library to support server/switch monitoring, to actually be able to check with the device whether it's working OK. However, this requires some sort of auto-detection and device support. Basically, I need to automatically detect a new device and start monitoring things like CPU and PING. So how do I auto-detect the machine remotely? This is what I need a library for. Rackspace has something like this ("Cloud Monitoring API"), but is there anything open source that can be used the same way? Nagios and the others don't have such an API, and the big, expensive systems are too heavyweight for a public cloud, so there must be some other network monitoring engine with an API that can add new servers automatically and support user isolation, so that, for example, I don't see servers other than mine.

    Read the article

  • just another apache to nginx rewrite question

    - by Brandon
    I have the following Apache rewrite directives:

        RewriteCond %{REQUEST_URI} ^/proxy(/|$) [NC]
        RewriteCond %{QUERY_STRING} (^|&)uri=(.*?)(&|$) [NC]
        RewriteRule .* /api/vs1.0/%2 [NC,L]

    I'm trying out nginx, so I'm trying to move the rewrites over. I came up with:

        rewrite ^/proxy(/|$) /api/vs1.0/$2 last;
        rewrite (^|&)uri=(.*?)(&|$) /api/vs1.0/$2 last;

    which is probably grossly incorrect. I'm just a mere web developer, so I was wondering if anyone could lend a hand here. I would be much obliged. I see that I am ignoring the query string specification, but I'm thinking that it shouldn't matter. I only have a vague idea of what the original rewrite is accomplishing, so I haven't much hope of coming up with something decent here, despite reading the relevant documentation for both servers.
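    The part that bites here: nginx's rewrite only ever matches against the path, never the query string, so the second rewrite above can never fire. Query parameters are reachable through the built-in $arg_* variables instead. A sketch of an equivalent, assuming the Apache rule's intent is "for /proxy requests, send the uri= parameter's value to /api/vs1.0/":

        location /proxy {
            if ($arg_uri) {
                # the trailing ? stops nginx from re-appending the original
                # query string; drop it if the backend still needs those args
                rewrite ^ /api/vs1.0/$arg_uri? last;
            }
            return 404;
        }

    This is untested against the actual app, so treat it as a starting point rather than a drop-in.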

    Read the article

  • Raid1+0: create stripe over two /dev/mdx on partition or not?

    - by Chris
    Given that I haven't found a way to define how a RAID10 is created with mdadm, I went with the RAID1+0 solution (see: How to display/define Mirror/Stripping pairs with mdadm):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
        mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1

    My question is about the stripe. For each mirror I create a primary partition over the full HD and set the partition type to FD. So, should I do the same for the stripe? That is, create a partition on /dev/md0 and /dev/md1 (primary over the full "HDD", with the partition type set correctly) and then build the stripe on those partitions? Is there a correct way here, or are there advantages/disadvantages to either solution? Thank you
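    Two notes that may save the nesting entirely. First, there is no need to partition the md devices: the stripe can be built directly on /dev/md0 and /dev/md1 (as in the commands above), and the filesystem can go directly on /dev/md10. Partition type FD only matters on the raw disks, where the kernel uses it for RAID autodetection. Second, mdadm does build RAID10 natively, including control over the layout, which replaces the whole three-array construction. A sketch with the same four disks:

        # near-2 layout reproduces classic RAID1+0 semantics
        mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=4 \
              /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1
        mkfs.ext4 /dev/md0   # filesystem goes straight on the array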

    Read the article

  • For a particular domain, how can I cache its JSON responses locally?

    - by Chris
    I'm coding the frontend of a web app that uses XHR to grab JSON data from a third party. The third-party service is slow, and because of its API design we need to make a LOT of API requests every time I refresh the page to test some new code. It's making the development loop painful. The requests are GETs, POSTs and PUTs, even though I'm pretty sure none of the requests are changing state. I want to go to localhost for the JSON rather than to this third-party API, simply to make my development process faster.
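    One way to get there without touching the frontend much: run a small caching reverse proxy on localhost and point the XHR base URL at it. A hedged nginx sketch; the upstream host, port, and cache path are placeholders, and note that nginx will cache GET/HEAD/POST but passes PUT straight through:

        proxy_cache_path /tmp/api_cache keys_zone=api:10m;

        server {
            listen 8080;
            location / {
                proxy_pass https://slow-third-party.example.com;
                proxy_cache api;
                proxy_cache_methods GET HEAD POST;
                # cache successful answers for the length of a dev session
                proxy_cache_valid 200 1h;
            }
        }

    One caveat: nginx's default cache key does not include the request body, so if different POST bodies hit the same URL, their responses would collide; a recording proxy such as mitmproxy may fit better in that case.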

    Read the article

  • C++ custom exceptions: run time performance and passing exceptions from C++ to C

    - by skyeagle
    I am writing a custom C++ exception class (so I can pass exceptions occurring in C++ to another language via a C API). My initial plan of attack was to proceed as follows:

        // C++
        class myClass {
        public:
            myClass();
            ~myClass();
            void foo();                          // throws myException
            int foo(const int i, const bool b);  // throws myException
        };
        typedef myClass* myClassPtr;

        // C API
        #ifdef __cplusplus
        extern "C" {
        #endif

        myClassPtr MyClass_New();
        void MyClass_Destroy(myClassPtr p);
        void MyClass_Foo(myClassPtr p);
        int MyClass_FooBar(myClassPtr p, int i, bool b);

        #ifdef __cplusplus
        }
        #endif

    I need a way to pass exceptions thrown in the C++ code to the C side. The information I want to pass to the C side is: (a) what happened, (b) where it happened, and (c) a simple stack trace (just the sequence of error messages in the order they occurred, no debugging info etc.). I want to modify my C API so that the API functions take a pointer to an ExceptionInfo struct, which will contain the exception info (if an exception occurred) before the caller consumes the results of the invocation. This raises two questions.

    Question 1. The implementation of each C++ method exposed in the C API needs to be enclosed in a try/catch statement. The performance implications of this seem quite serious, according to this article: "It is a mistake (with high runtime cost) to use C++ exception handling for events that occur frequently, or for events that are handled near the point of detection." At the same time, I remember reading somewhere in my C++ days that although exception handling is expensive, it only becomes expensive when an exception actually occurs. So which is correct, and what should I do? Is there an alternative way to trap errors safely and pass the resulting error info to the C API? Or is this a minor consideration (the article is quite old, after all, and hardware has improved a bit since then)?

    Question 2. I would like to modify the exception class given in that article so that it contains a simple stack trace, and I need some help doing that. To keep the exception class lightweight, I think it's a good idea not to include any STL classes like string or vector (good idea/bad idea?). That potentially leaves me with a fixed-length C string, stack allocated, to which I can keep appending messages (delimited by a unique separator, up to the maximum length of the buffer). It's been a while since I did any serious C++ coding, and I will be grateful for the help. BTW, this is what I have come up with so far (I am intentionally not deriving from std::exception, because of the performance reasons mentioned in the article; instead I throw an integral "what" code based on an exception enumeration):

        class fast_exception {
        public:
            fast_exception(int what, char const* file = 0, int line = 0)
                : what_(what), line_(line), file_(file) {/*empty*/}

            int what() const { return what_; }
            int line() const { return line_; }
            char const* file() const { return file_; }

        private:
            int what_;
            int line_;
            char const* file_;  // originally declared as a fixed-size array,
                                // which didn't match the constructor; a pointer
                                // matches the initializer list as written
        };
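    On Question 1: mainstream compilers nowadays use table-driven ("zero-cost") exception handling, where entering a try block costs essentially nothing at runtime and the price is paid only when a throw actually unwinds, so wrapping the boundary functions is normally fine. A sketch of the catch-at-the-boundary pattern the question describes, assuming a hypothetical ExceptionInfo struct along the lines given above; all names here are illustrative, not a fixed API:

        // hypothetical error-reporting struct; all fields are illustrative
        struct ExceptionInfo {
            int  what;        // 0 = no error
            int  line;
            char trace[256];  // simple append-only message buffer
        };

        extern "C" int MyClass_FooBar(myClassPtr p, int i, bool b,
                                      ExceptionInfo* err) {
            try {
                return p->foo(i, b);
            } catch (fast_exception const& e) {
                if (err) { err->what = e.what(); err->line = e.line(); }
            } catch (...) {
                if (err) err->what = -1;  // unknown C++ exception
            }
            return 0;  // caller must check err->what before trusting this
        }

    The one hard rule is that no exception may cross the extern "C" boundary (that is undefined behavior); everything else is a policy choice.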

    Read the article

  • TableView Cells Use Whole Screen Height

    - by Kyle
    I read through this tutorial (Appcelerator: Using JSON to Build a Twitter Client) and attempted to create my own simple application to interact with a Jetty server I set up running some Spring code. I basically make an HTTP GET request that gives me a bunch of contacts in JSON format. I then populate several rows with my JSON data and try to build a TableView. All of that works; however, my TableView rows take up the whole screen. Each row is one screen. I can scroll up and down and see all my data, but I'm trying to figure out what's wrong in my styling that's making the cells use the whole screen. My CSS is not great, so any help is appreciated. Thanks! Here's my js file that's loading the TableView:

        // create variable "win" to refer to current window
        var win = Titanium.UI.currentWindow;

        // Function loadContacts()
        function loadContacts() {
            // empty array "rowData" for table view cells
            var rowData = [];
            // create http client
            var loader = Titanium.Network.createHTTPClient();
            // set http request method and url
            loader.setRequestHeader("Accept", "application/json");
            loader.open("GET", "http://localhost:8080/contactsample/contacts");
            // run the function when the data is ready for us to process
            loader.onload = function(){
                Ti.API.debug("JSON Data: " + this.responseText);
                // evaluate json
                var contacts = JSON.parse(this.responseText);
                for(var i=0; i < contacts.length; i++) {
                    var id = contacts[i].id;
                    Ti.API.info("JSON Data, Row[" + i + "], ID: " + contacts[i].id);
                    var name = contacts[i].name;
                    Ti.API.info("JSON Data, Row[" + i + "], Name: " + contacts[i].name);
                    var phone = contacts[i].phone;
                    Ti.API.info("JSON Data, Row[" + i + "], Phone: " + contacts[i].phone);
                    var address = contacts[i].address;
                    Ti.API.info("JSON Data, Row[" + i + "], Address: " + contacts[i].address);
                    // create row
                    var row = Titanium.UI.createTableViewRow({
                        height:'auto'
                    });
                    // create row's view
                    var contactView = Titanium.UI.createView({
                        height:'auto',
                        layout:'vertical',
                        top:5, right:5, bottom:5, left:5
                    });
                    var nameLbl = Titanium.UI.createLabel({
                        text:name,
                        left:5,
                        height:24,
                        width:236,
                        textAlign:'left',
                        color:'#444444',
                        font:{ fontFamily:'Trebuchet MS', fontSize:16, fontWeight:'bold' }
                    });
                    var phoneLbl = Titanium.UI.createLabel({
                        text: phone,
                        top:0, bottom:2,
                        height:'auto',
                        width:236,
                        textAlign:'right',
                        font:{ fontSize:14 }
                    });
                    var addressLbl = Titanium.UI.createLabel({
                        text: address,
                        top:0, bottom:2,
                        height:'auto',
                        width:236,
                        textAlign:'right',
                        font:{ fontSize:14 }
                    });
                    contactView.add(nameLbl);
                    contactView.add(phoneLbl);
                    contactView.add(addressLbl);
                    row.add(contactView);
                    row.className = "item" + i;
                    rowData.push(row);
                }
                Ti.API.info("RowData: " + rowData);
                // create table view
                var tableView = Titanium.UI.createTableView({
                    data: rowData
                });
                win.add(tableView);
            };
            // send request
            loader.send();
        }

        // get contacts
        loadContacts();

    And here are some screens showing my problem. I tried playing with the top, bottom, right, left pixels a bit and didn't seem to be getting anywhere. All help is greatly appreciated. Thanks!
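    A hedged observation rather than a confirmed fix: height:'auto' on the row combined with an 'auto'-height child view is a classic trigger for rows ballooning to full screen in Titanium builds of this era, because the row's height never gets computed. It may be worth trying explicit sizes first, e.g.:

        // give the row a concrete height and let the labels stack inside it
        var row = Titanium.UI.createTableViewRow({ height: 80 });

    If an explicit height fixes it, the 'auto' computation (not the margins) was the culprit.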

    Read the article

  • Using JUnit with App Engine and Eclipse

    - by Mark M
    I am having trouble setting up JUnit with App Engine in Eclipse. I have JUnit set up correctly; that is, I can run tests that don't involve the datastore or other services. However, when I try to use the datastore in my tests, they fail. The code I am trying right now is from the App Engine site (see below): http://code.google.com/appengine/docs/java/tools/localunittesting.html#Running_Tests

    So far I have added the external JAR (using Eclipse) appengine-testing.jar, but when I run the tests I get the exception below. So I am clearly not understanding the instructions for enabling the services from the web page mentioned above. Can someone clear up the steps needed to make the App Engine services available in Eclipse?

        java.lang.NoClassDefFoundError: com/google/appengine/api/datastore/dev/LocalDatastoreService
            at com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig.tearDown(LocalDatastoreServiceTestConfig.java:138)
            at com.google.appengine.tools.development.testing.LocalServiceTestHelper.tearDown(LocalServiceTestHelper.java:254)
            at com.cooperconrad.server.MemberTest.tearDown(MemberTest.java:28)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
            at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
            at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:73)
            at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:46)
            at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:180)
            at org.junit.runners.ParentRunner.access$000(ParentRunner.java:41)
            at org.junit.runners.ParentRunner$1.evaluate(ParentRunner.java:173)
            at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
            at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
            at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
            at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:46)
            at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
            at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
        Caused by: java.lang.ClassNotFoundException: com.google.appengine.api.datastore.dev.LocalDatastoreService
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
            ... 25 more

    Here is the actual code (pretty much copied from the site):

        package com.example;

        import static org.junit.Assert.*;

        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.Query;
        import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

        public class MemberTest {

            private final LocalServiceTestHelper helper =
                new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

            @Before
            public void setUp() {
                helper.setUp();
            }

            @After
            public void tearDown() {
                helper.tearDown();
            }

            // run this test twice to prove we're not leaking any state across tests
            private void doTest() {
                DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
                assertEquals(0, ds.prepare(new Query("yam")).countEntities());
                ds.put(new Entity("yam"));
                ds.put(new Entity("yam"));
                assertEquals(2, ds.prepare(new Query("yam")).countEntities());
            }

            @Test
            public void testInsert1() {
                doTest();
            }

            @Test
            public void testInsert2() {
                doTest();
            }

            @Test
            public void foo() {
                assertEquals(4, 2 + 2);
            }
        }
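    For what it's worth, this particular NoClassDefFoundError usually means the test classpath contains only appengine-testing.jar; the local unit-testing docs linked above also require the API and stub jars from the SDK. Under a default SDK layout that would be (paths are assumptions, adjust to your install):

        appengine-java-sdk/lib/impl/appengine-api.jar
        appengine-java-sdk/lib/impl/appengine-api-stubs.jar

    (Later SDKs also ship appengine-local-runtime.jar in lib/impl.) In Eclipse: Project > Properties > Java Build Path > Libraries > Add External JARs, then re-run the tests.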

    Read the article

  • FaceBook Login Problem

    - by toman
    Hi all, I am working on an application which extracts information from Facebook search, and hence I need to log in to Facebook. I have registered my application on the Facebook developers site and have got an API key and secret key. In my code I am getting an exception when I try to log in. Here is my code for logging in to Facebook:

        import com.facebook.api.FacebookRestClient;
        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.HttpState;
        import org.apache.commons.httpclient.NameValuePair;
        import org.apache.commons.httpclient.methods.GetMethod;
        import org.apache.commons.httpclient.methods.PostMethod;
        import org.apache.commons.httpclient.params.HttpClientParams;

        public class FaceLogin {

            public FaceLogin() {
                getUserID("username", "password");
            }

            private static void getUserID(String email, String password) {
                String session = null;
                try {
                    HttpClient http = new HttpClient();
                    http.setParams(new HttpClientParams());
                    //http.getHostConfiguration().setHost("http://www.facebook.com/");
                    http.setState(new HttpState());
                    String api_key = "****some key****";
                    String secret = "****some key****";
                    FacebookRestClient client = new FacebookRestClient(api_key, secret);
                    client.setIsDesktop(true);
                    String token = client.auth_createToken();
                    final String loginId = "http://www.facebook.com/login.php";
                    GetMethod get = new GetMethod(loginId + "?api_key=" + api_key + "&v=1.0&auth_token=" + token);
                    System.out.println("Get=" + get);
                    http.executeMethod(get);
                    PostMethod post = new PostMethod(loginId);
                    post.addParameter(new NameValuePair("api_key", api_key));
                    post.addParameter(new NameValuePair("v", "1.0"));
                    post.addParameter(new NameValuePair("auth_token", token));
                    post.addParameter(new NameValuePair("fbconnect", "true"));
                    post.addParameter(new NameValuePair("return_session", "true"));
                    post.addParameter(new NameValuePair("session_key_only", "true"));
                    post.addParameter(new NameValuePair("req_perms", "read_stream,publish_stream"));
                    post.addParameter(new NameValuePair("lsd", "8HYdi"));
                    post.addParameter(new NameValuePair("locale", "en_US"));
                    post.addParameter(new NameValuePair("persistent", "1"));
                    post.addParameter(new NameValuePair("email", email));
                    post.addParameter(new NameValuePair("pass", password));
                    System.out.println("Token =" + token);
                    int postStatus = http.executeMethod(post);
                    System.out.println("Response : " + postStatus);
                    session = client.auth_getSession(token); // Here I am getting the error
                    System.out.println("Session string: " + session);
                    long userid = client.users_getLoggedInUser();
                    System.out.println("User Id is : " + userid);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static void main(String k[]) {
                FaceLogin facebookLoginObj = new FaceLogin();
            }
        }

    I am getting the following exception:

        org.apache.commons.httpclient.HttpMethodBase processResponseHeaders
        WARNING: Cookie rejected: "$Version=0; $Domain=deleted; $Path=/; $Domain=.facebook.com". Cookie name may not start with $
        Response : 200
        Jun 8, 2010 2:07:36 PM org.apache.commons.httpclient.HttpMethodBase processResponseHeaders
        WARNING: Cookie rejected: "$Version=0; $Path=deleted; $Path=/; $Domain=.facebook.com". Cookie name may not start with $
        Facebook returns error code 100
        com.facebook.api.FacebookException: Invalid parameter
            - v - 1.0
            - auth_token - 004e90dc8818d5f0921d1065d24508d3
            - method - facebook.auth.getSession
            - call_id - 1275986256796
            - api_key - f7cb1e48c383ef599da9021fc4dec322
            - sig - 9344ec75b74a0a87bcae645046d45da8
            at com.facebook.api.FacebookRestClient.callMethod(FacebookRestClient.java:828)
            at com.facebook.api.FacebookRestClient.callMethod(FacebookRestClient.java:606)
            at com.facebook.api.FacebookRestClient.auth_getSession(FacebookRestClient.java:1891)
            at facebookcrawler.FacebookLogin.getUserID(FacebookLogin.java:81)
            at facebookcrawler.FacebookLogin.<init>(FacebookLogin.java:24)
            at facebookcrawler.FaceLogin.main(FaceLogin.java:80)

    The problem seems to be with creating the session. I have searched for solutions on the net, but nothing has helped me get the login working. Please help me if you can suggest some way to resolve this problem. Thanks for all your valuable suggestions.

    Read the article

  • Enunciate http error 404

    - by malakan
    Hi, I tried to set up a simple project with Spring and Enunciate with JAX-WS/JAX-RS annotations, but I didn't get it to work. I used a great tutorial as a template for the Enunciate integration: http://docs.codehaus.org/display/ENUNCIATE/A+Rich+Web+service+API+for+your+favorite+framework

    Enunciate creates the API page as in the tutorial, but I get this error: if I open a mount point, for example the REST one (/rest/Service/getService/1), I get a 404 error (NOT_FOUND). Here is a sample of my code.

    pom.xml:

        <plugin>
          <groupId>org.codehaus.enunciate</groupId>
          <artifactId>maven-enunciate-spring-plugin</artifactId>
          <version>1.19</version>
          <configuration>
            <configFile>src/main/webapp/WEB-INF/enunciate.xml</configFile>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>assemble</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    enunciate.xml:

        <api-classes>
          <include pattern="com.myProject.model.*"/>
          <include pattern="com.myProject.services.MyService"/>
          <include pattern="com.myProject.services.MyServiceImpl"/>
        </api-classes>
        <webapp mergeWebXML="web.xml"/>
        <modules>
          <docs docsDir="api" title="myApp API"/>
          <spring-app>
            <springImport file="spring/applicationContext-config.xml"/>
          </spring-app>
        </modules>

    My service interface:

        package com.myProject.services;

        import java.util.List;
        import javax.jws.WebService;
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import com.myProject.model.*;

        @WebService
        public interface MyService {

            @GET
            @Path("getService/{id}")
            public Service getService(@PathParam(value = "id") Integer id);
        }

    MyServiceImpl:

        package com.myProject.services;

        import javax.jws.WebService;
        ... (all imports)
        import javax.persistence.EntityManager;

        @Service
        @Path("/Service")
        @WebService(endpointInterface = "com.myProjects.services.MyService")
        @RemotingDestination(channels = {"my-amf"})
        public class MyServiceImpl implements MyService {

            private final Log logger = LogFactory.getLog(getClass());

            @Autowired
            private MyServiceDao myServiceDao;

            /*@Inject
            private MyServiceDao myServiceDao;*/

            public Service getService(Integer id) {
                return myServiceDao.getService(id);
            }
        }

    And I put @XmlRootElement on the model. I have tried several configurations but couldn't get the XML or JSON response, just the 404. Does anyone know what is wrong?

    Read the article

  • mysql_fetch_array() problem

    - by Marty
    So I have 3 DB tables that are all identical in every way (the data is different) except the name of the table. I did this so I could use one piece of code with a switch, like so:

        function disp_bestof($atts) {
            extract(shortcode_atts(array(
                'topic' => ''
            ), $atts));

            $connect = mysql_connect("localhost","foo","bar");
            if (!$connect) {
                die('Could not connect: ' . mysql_error());
            }

            switch ($topic) {
                case "attorneys":
                    $bestof_query = "SELECT * FROM attorneys p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC";
                    $category_query = "SELECT * FROM categories";
                    $db = mysql_select_db('roanoke_BestOf_TopAttorneys');
                    $query = mysql_query($bestof_query);
                    $categoryQuery = mysql_query($category_query);
                    break;
                case "physicians":
                    $bestof_query = "SELECT * FROM physicians p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC";
                    $category_query = "SELECT * FROM categories";
                    $db = mysql_select_db('roanoke_BestOf_TopDocs');
                    $query = mysql_query($bestof_query);
                    $categoryQuery = mysql_query($category_query);
                    break;
                case "dining":
                    $bestof_query = "SELECT * FROM restaurants p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC";
                    $category_query = "SELECT * FROM categories";
                    $db = mysql_select_db('roanoke_BestOf_DiningAwards');
                    $query = mysql_query($bestof_query);
                    $categoryQuery = mysql_query($category_query);
                    break;
                default:
                    $bestof_query = "switch on $best did not match required case(s)";
                    break;
            }

            $category = '';
            while( $result = mysql_fetch_array($query) ) {
                if( $result['category'] != $category ) {
                    $category = $result['category'];
                    //echo "<div class\"category\">";
                    $bestof_content .= "<h2>".$category."</h2>\n";
                    //echo "<ul>";

    Now, this whole thing works PERFECT for the first two cases, but the third one, "dining", breaks with this error:

        Warning: mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource ... on line 78

    Line 78 is the while() at the bottom. I have checked and double-checked and can't figure out what the problem is. Here's the DB structure for 'restaurants':

        CREATE TABLE `restaurants` (
          `id` int(10) NOT NULL auto_increment,
          `restaurant` varchar(255) default NULL,
          `address1` varchar(255) default NULL,
          `address2` varchar(255) default NULL,
          `city` varchar(255) default NULL,
          `state` varchar(255) default NULL,
          `zip` double default NULL,
          `phone` double default NULL,
          `URI` varchar(255) default NULL,
          `neighborhood` varchar(255) default NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM AUTO_INCREMENT=249 DEFAULT CHARSET=utf8

    Does anyone see what I'm doing wrong here? I'm passing "dining" to the function, and as I said before, the first two cases in the switch work fine. I'm sure it's something stupid...
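    A "not a valid MySQL result resource" warning almost always means mysql_query() returned FALSE and the failure went unchecked. A small diagnostic sketch for the dining branch; mysql_error() will usually name the actual problem (missing table, wrong database name, permissions):

        $db = mysql_select_db('roanoke_BestOf_DiningAwards');
        if (!$db) {
            die('select_db failed: ' . mysql_error());
        }
        $query = mysql_query($bestof_query);
        if (!$query) {
            die('Query failed: ' . mysql_error());
        }

    One thing worth checking given that only this case breaks: the query joins awards, categories, and awardLevels inside the selected database, so all four tables must exist in roanoke_BestOf_DiningAwards, not just restaurants.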

    Read the article

  • Safely defining variables for public callback functions in javascript

    - by djreed
    I am working with the YouTube iFrame API to embed a number of videos on a page. Documentation here: https://developers.google.com/youtube/iframe_api_reference#Requirements

    In summary, you load the API asynchronously using the following snippet:

        var tag = document.createElement('script');
        tag.src = "http://www.youtube.com/player_api";
        var firstScriptTag = document.getElementsByTagName('script')[0];
        firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

    Once loaded, the API fires the predefined callback function onYouTubePlayerAPIReady. For additional context: I am defining a library file for this in Google Closure. I provide a namespace with goog.provide('yt.video') and then use goog.exportSymbol so that the API can find the function. That all works fine. My challenge is that I would like to pass two variables to the callback function. Is there any way to do this without defining these two variables in the context of the window object?

        goog.provide('yt.video');
        goog.require('goog.dom');

        yt.video = function(videos, locales) {
            this.videos = videos;
            this.captionLocales = locales;
            this.init();
        };

        yt.video.prototype.init = function() {
            var tag = document.createElement('script');
            tag.src = "http://www.youtube.com/player_api";
            var firstScriptTag = document.getElementsByTagName('script')[0];
            firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);
        };

        /*
         * Callback function fired when the YT API is ready.
         * This is exported using goog.exportSymbol in another file and
         * is being fired by the API properly.
         */
        yt.video.prototype.onPlayerReady = function(videos, locales) {
            window.console.log('this :' + this);                 // logs window
            window.console.log('this.videos : ' + this.videos);  // logs undefined

            /*
             * Video settings from Django variable
             */
            for (var i = 0; i < this.videos.length; i++) {
                var playerEvents = {};
                var embedVars = {};
                var el = this.videos[i].el;
                var playerVid = this.videos[i].vid;
                var playerWidth = this.videos[i].width;
                var playerHeight = this.videos[i].height;
                var captionLocales = this.videos[i].locales;
                if (this.videos[i].playerVars) {
                    embedVars = this.videos[i].playerVars;
                }
                if (this.videos[i].events) {
                    playerEvents = this.videos[i].events;
                }

                /*
                 * Show captions by default
                 */
                if (goog.array.indexOf(captionLocales, 'es') >= 0) {
                    embedVars.cc_load_policy = 1;
                }

                new YT.Player(el, {
                    height: playerHeight,
                    width: playerWidth,
                    videoId: playerVid,
                    events: playerEvents,
                    playerVars: embedVars
                });
            }
        };

    To initialize this, I am currently using the following within a self-executing anonymous function:

        var videos = [
            {"vid": "video_id", "el": "player-1", "width": 640, "height": 390,
             "locales": ["es", "fr"], "events": {"onStateChange": stateChanged}},
            {"vid": "video_id", "el": "player-2", "locales": ["es", "fr"],
             "width": 640, "height": 390}
        ];
        var locales = ['es'];
        var videoTemplate = new yt.video(videos, locales);
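    The console output in the question points at the real issue: the API invokes the exported function as a bare global, so this is window and the instance is lost. One sketch that avoids putting the data itself on window: export a closure bound to the instance rather than the prototype method (goog.bind and goog.exportSymbol are Closure built-ins; only the wiring below is hypothetical):

        var videoTemplate = new yt.video(videos, locales);

        // the API still calls a global, but the global now closes over the
        // instance, so this.videos and this.captionLocales resolve inside
        goog.exportSymbol('onYouTubePlayerAPIReady',
            goog.bind(videoTemplate.onPlayerReady, videoTemplate));

    With this wiring, the two values never need to be passed at call time at all; they already live on the instance.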

    Read the article

  • Releasing Shrinkr – An ASP.NET MVC Url Shrinking Service

    - by kazimanzurrashid
    A few months back, I started blogging about developing a URL-shrinking service in ASP.NET MVC, but could not complete it due to my engagement with my professional projects. Recently, I was able to manage some time for this project to complete the remaining features that we planned for the initial release. So I am announcing the official release. The source code is hosted on CodePlex; you can also see it live in action over here.

    The features that we have implemented so far:

    Public:
    - OpenID login.
    - Base 36 and 62 based URL generation.
    - 301 and 302 redirect.
    - Custom alias.
    - Maintaining the user's generated URLs.
    - URL thumbnail.
    - Spam detection through Google Safe Browsing.
    - Preview page (with Google warning).
    - REST-based API for URL shrinking (json/xml/text).

    Control Panel:
    - Application health monitoring.
    - Marking URLs as spam/safe.
    - Block/unblock user.
    - Allow/disallow user API access.
    - Manage banned domains.
    - Manage banned IP addresses.
    - Manage reserved aliases.
    - Manage bad words.
    - Twitter notification when spam is submitted.

    Behind the scenes it is developed with:
    - Entity Framework 4 (Code Only)
    - ASP.NET MVC 2
    - AspNetMvcExtensibility
    - Telerik Extensions for ASP.NET MVC (yes, you can use it freely in your open source projects)
    - DotNetOpenAuth
    - Elmah
    - Moq
    - xUnit.net
    - jQuery

    We will also be releasing a minor update in a few weeks which will contain some of the popular Twitter client plug-ins and samples of how to use the REST API; we will also try to include the NHibernate + Spark version in that release. In the next release (not sure about the timeline yet) we will include geocoding and some rich reporting for both users and administrators. Enjoy!!!

    Read the article

  • Development costs of an indie multiplayer arcade shooter

    - by VorteX
    A friend of mine and I have been wanting to remake one of our favorite games of all time (007: Nightfire) for a long time now. However, remaking a game like this is really complicated because of the rights to the Bond franchise. That site was created months ago, and through it we found some great people (modelers, level designers, etc.) who want to help, but our plans have changed a little bit. Our current plan is to create a multiplayer-only remake of the original game, removing all the Bond references so that the rights shouldn't be a problem anymore. We still want to create the game using the UDK and SteamWorks, for both PC and Mac. Currently there are three things I need to find out:

    1. The costs of creating an arcade shooter like this. We want to use crowdfunding to fund the project.
    2. The best way to manage a project like this over the internet. Our current team consists of people all over the world, and we need a central place to discuss, collaborate, and store our files.
    3. The best place to find suitable people for this project. We already have some modelers and level designers, but we also need animators, artists, programmers, etc.

    I believe creating an arcade game like this with a small team is feasible. The game in a nutshell: ±10 maps, ±20 weapons, ±12 game modes, weapon/armor pickups, grapple-hook gadget, no ADS, uses SteamWorks, online matchmaking, custom games, AI bots, appearance selection, level progression using XP (no unlocks), achievements. Does anyone know where to start? Any help is appreciated.

    Read the article

  • SQL SERVER – Concurrency Basics – Guest Post by Vinod Kumar

    - by pinaldave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

    Learning is always fun when it comes to SQL Server, and learning the basics again can be even more fun. I did write about transaction logs and recovery on my blog, and simplifying the basics is a challenge. In the real world we always see checks and queues for a process (say railway reservations, banks, customer support); there is a system of lines and queues to accommodate everyone. The shorter the queue, the higher the efficiency of the system (a.k.a. the higher the concurrency). Every database implements this using checks like locking and blocking mechanisms, and they implement the standards in a way that facilitates higher concurrency. In this post, let us talk about the topic of concurrency and the various aspects one needs to know about concurrency inside SQL Server. Let us learn the concepts as one-liners:

    - Concurrency can be defined as the ability of multiple processes to access or change shared data at the same time.
    - The greater the number of concurrent user processes that can be active without interfering with each other, the greater the concurrency of the database system.
    - Concurrency is reduced when a process that is changing data prevents other processes from reading that data, or when a process that is reading data prevents other processes from changing that data.
    - Concurrency is also affected when multiple processes attempt to change the same data simultaneously.

    There are two approaches to managing concurrent data access: the optimistic concurrency model and the pessimistic concurrency model.

    Concurrency Models

    Pessimistic concurrency:
    - Default behavior: acquire locks to block access to data that another process is using.
    - Assumes that enough data modification operations are in the system that any given read operation is likely affected by a data modification made by another user (assumes conflicts will occur).
    - Avoids conflicts by acquiring a lock on data being read, so no other processes can modify that data.
    - Also acquires locks on data being modified, so no other processes can access the data for either reading or modifying.
    - Readers block writers; writers block readers and writers.

    Optimistic concurrency:
    - Assumes that there are sufficiently few conflicting data modification operations in the system that any single transaction is unlikely to modify data that another transaction is modifying.
    - The default behavior of optimistic concurrency is to use row versioning to allow data readers to see the state of the data before the modification occurs.
    - Older versions of the data are saved, so a process reading data can see the data as it was when the process started reading, unaffected by any changes being made to that data.
    - Processes modifying the data are unaffected by processes reading the data, because the reader is accessing a saved version of the data rows.
    - Readers do not block writers, and writers do not block readers; but writers can and will block writers.

    Transaction Processing

    - A transaction is the basic unit of work in SQL Server.
    - A transaction consists of SQL commands that read and update the database, but the update is not considered final until a COMMIT command is issued (at least for an explicit transaction: the start is marked with a BEGIN TRAN and the end by a COMMIT TRAN or ROLLBACK TRAN).
    - Transactions must exhibit all the ACID properties of a transaction.

    ACID Properties

    Transaction processing must guarantee the consistency and recoverability of SQL Server databases, and it ensures all transactions are performed as a single unit of work regardless of hardware or system failure. A = Atomicity, C = Consistency, I = Isolation, D = Durability.

    - Atomicity: each transaction is treated as all or nothing; it either commits or aborts.
    - Consistency: ensures that a transaction won't allow the system to arrive at an incorrect logical state; the data must always be logically correct. Consistency is honored even in the event of a system failure.
    - Isolation: separates concurrent transactions from the updates of other incomplete transactions. SQL Server accomplishes isolation among transactions by locking data or creating row versions.
    - Durability: after a transaction commits, the durability property ensures that the effects of the transaction persist even if a system failure occurs. If a system failure occurs while a transaction is in progress, the transaction is completely undone, leaving no partial effects on the data.

    Transaction Dependencies

    In addition to supporting all four ACID properties, a transaction might exhibit a few other behaviors, known as dependency problems or consistency problems:

    - Lost updates: occur when two processes read the same data, both manipulate the data changing its value, and both then try to update the original data to the new value. The second process might overwrite the first update completely.
    - Dirty reads: occur when a process reads uncommitted data. If one process has changed data but not yet committed the change, another process reading the data will read it in an inconsistent state.
    - Non-repeatable reads: a read is non-repeatable if a process might get different values when reading the same data in two reads within the same transaction. This can happen when another process changes the data in between the reads that the first process is doing.
    - Phantoms: occur when membership in a set changes. A phantom occurs if two SELECT operations using the same predicate in the same transaction return a different number of rows.

    Isolation Levels

    SQL Server supports five isolation levels that control the behavior of read operations.

    Read Uncommitted:
    - All the behaviors above except lost updates are possible.
    - Implemented by allowing read operations to take no locks; because of this, they won't be blocked by conflicting locks acquired by other processes. The process can read data that another process has modified but not yet committed.
    - When using the read uncommitted isolation level and scanning an entire table, SQL Server can decide to do an allocation order scan (in page-number order) instead of a logical order scan (following page pointers). If another process doing concurrent operations changes data and moves rows to a new location in the table, the allocation order scan can end up reading the same row twice. This can also happen if you have read a row before it is updated and an update then moves the row to a higher page number than your scan has yet encountered.
    - Performing an allocation order scan under read uncommitted can also cause you to miss a row completely; this can happen when a row on a high page number that hasn't been read yet is updated and moved to a lower page number that has already been read.

    Read Committed:
    - There are two varieties of read committed isolation: optimistic and pessimistic (the default).
    - Ensures that a read never reads data that another application hasn't committed.
    - If another transaction is updating data and holds exclusive locks on it, your transaction has to wait for those locks to be released. Your transaction must put share locks on data that is visited, which means that data might be unavailable for others to use. A share lock doesn't prevent others from reading, but it prevents them from updating.
    - Read committed (snapshot) ensures that an operation never reads uncommitted data, but not by forcing other processes to wait. SQL Server generates a version of the changed row with its previous committed values. The data being changed is still locked, but other processes can see the previous versions of the data as they were before the update operation began.

    Repeatable Read:
    - This is a pessimistic isolation level.
    - Ensures that if a transaction revisits data, or a query is reissued, the data doesn't change. That is, issuing the same query twice within a transaction cannot pick up any changes to data values made by another user's transaction, because no changes can be made by other transactions. However, this does allow phantom rows to appear.
    - Preventing non-repeatable reads is a desirable safeguard, but the cost is that all shared locks in a transaction must be held until the completion of the transaction.

    Snapshot:
    - Snapshot isolation (SI) is an optimistic isolation level.
    - Allows processes to read older versions of committed data if the current version is locked.
    - The difference between snapshot and read committed has to do with how old the older versions have to be.
    - It's possible to have two transactions executing simultaneously that give us a result that is not possible in any serial execution.

    Serializable:
    - This is the strongest of the pessimistic isolation levels.
    - Adds to the repeatable read isolation level by ensuring that if a query is reissued, rows were not added in the interim; i.e., phantoms do not appear.
    - Preventing phantoms is another desirable safeguard, but the cost of this extra safeguard is similar to that of repeatable read: all shared locks in a transaction must be held until the transaction completes.
    - In addition, the serializable isolation level requires that you lock not only data that has been read, but also data that doesn't exist. For example, if a SELECT returned no rows, you want it to return no rows when the query is reissued. This is implemented in SQL Server by a special kind of lock called the key-range lock. Key-range locks require that there be an index on the column that defines the range of values; if there is no index on the column, serializable isolation requires a table lock.
    - It gets its name from the fact that running multiple serializable transactions at the same time is the equivalent of running them one at a time.

    Now that we understand the basics of what concurrency is, the subsequent blog posts will try to bring out the basics around locking, blocking and deadlocks, because they are the fundamental blocks that make concurrency possible. Now if you are with me, let us continue learning with SQL Server Locking Basics.
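    To make the isolation-level discussion concrete, here is a small T-SQL sketch (the database and table names are placeholders). The ALTER DATABASE options enable the row-versioning behavior described above; the SET statement then picks the level per session:

        -- enable the optimistic (row-versioning) options once per database
        ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- each session then chooses its own isolation level
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
        BEGIN TRAN;
            SELECT COUNT(*) FROM dbo.Orders;  -- reads the versioned snapshot,
                                              -- never blocks on writers
        COMMIT TRAN;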
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Concurrency

    Read the article
