Search Results

Search found 13193 results on 528 pages for 'analytics api'.

Page 187/528 | < Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >

  • Are APIs imported by a custom class included in its .class file?

    - by kyrogue
    I have a question: I have a custom class which imports java.sql.*, and I am creating a JSP page. In the JSP page I did a page import of the custom class; however, when I tried to call my custom class's database methods it did not work. Only when I did a page import of java.sql.* did it work. So are the APIs imported by a custom class included in its .class file? An error occurred at line: 6 in the jsp file: /resetpw.jsp Statement cannot be resolved to a type 3: 4: <% 5: db.connect(); 6: Statement stmt = db.getConnection().createStatement(); 7: ResultSet rs = stmt.executeQuery("SELECT * FROM created_accounts"); 8: 9: An error occurred at line: 7 in the jsp file: /resetpw.jsp ResultSet cannot be resolved to a type 4: <% 5: db.connect(); 6: Statement stmt = db.getConnection().createStatement(); 7: ResultSet rs = stmt.executeQuery("SELECT * FROM created_accounts"); 8: 9: 10: Stacktrace: org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:93) org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:330) org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:451) org.apache.jasper.compiler.Compiler.compile(Compiler.java:319) org.apache.jasper.compiler.Compiler.compile(Compiler.java:298) org.apache.jasper.compiler.Compiler.compile(Compiler.java:286) org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:565) org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:309) org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:308) org.apache.jasper.servlet.JspServlet.service(JspServlet.java:259) javax.servlet.http.HttpServlet.service(HttpServlet.java:729) note The full stack trace of the root cause is available in the Apache Tomcat/5.5.31 logs. Edited: added the error that comes up if I do not do a page import of java.sql.*;
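    A rough way to see why this happens (class names and the JDBC URL below are hypothetical): an import only tells the compiler how to resolve simple names in that one source file, and nothing about it is packaged into the resulting .class file for callers to reuse. So even though the helper class was compiled against java.sql, the JSP still has to import java.sql.Statement and java.sql.ResultSet itself before it can declare variables of those types.

    // --- Db.java: the helper compiles fine with its own java.sql imports ---
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class Db {
        private Connection connection;

        public void connect() throws SQLException {
            // Hypothetical connection details, for illustration only
            connection = DriverManager.getConnection("jdbc:mysql://localhost/test", "user", "pass");
        }

        public Connection getConnection() {
            return connection;
        }
    }

    // --- Caller.java: declares java.sql types, so it needs its own imports ---
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Caller {
        public static void main(String[] args) throws Exception {
            Db db = new Db();
            db.connect();
            Statement stmt = db.getConnection().createStatement(); // does not compile without the imports above
            ResultSet rs = stmt.executeQuery("SELECT * FROM created_accounts");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }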

    Read the article

  • What's the best way to communicate the purpose of a string parameter in a public API?

    - by Dave
    According to the guidance published in New Recommendations for Using Strings in Microsoft .NET 2.0, the data in a string may exhibit one of the following types of behavior: A non-linguistic identifier, where bytes match exactly. A non-linguistic identifier, where case is irrelevant, especially a piece of data stored in most Microsoft Windows system services. Culturally-agnostic data, which still is linguistically relevant. Data that requires local linguistic customs. Given that, I'd like to know the best way to communicate which behavior is expected of a string parameter in a public API. I wasn't able to find an answer in the Framework Design Guidelines. Consider the following methods: f(string this_is_a_linguistic_string) g(string this_is_a_symbolic_identifier_so_use_ordinal_compares) Is variable naming and XML documentation the best I can do? Could I use attributes in some way to mark the requirements of the string? Now consider the following case: h(Dictionary<string, object> dictionary) Note that the dictionary instance is created by the caller. How do I communicate that the callee expects the IEqualityComparer<string> object held by the dictionary to perform, for example, a case-insensitive ordinal comparison?
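    The post is about .NET, but the underlying design question translates. As a hedged Java analogue of one option, the callee can stop relying on documentation and make the comparison semantics part of its own behaviour, for example by re-keying the caller's map into one whose comparator is explicit (String.CASE_INSENSITIVE_ORDER is close to, though not exactly, a case-insensitive ordinal compare):

    import java.util.Map;
    import java.util.TreeMap;

    public class CaseInsensitiveLookup {
        // Defensive sketch: instead of trusting whatever comparer the caller's
        // dictionary was built with, copy the entries into a map with known semantics.
        public static Object find(Map<String, Object> input, String key) {
            TreeMap<String, Object> normalized = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
            normalized.putAll(input);
            return normalized.get(key);
        }
    }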

    Read the article

  • In Sharepoint, how do I update the name of a folder in a document library using the web service API?

    - by Jess
    I'm using the UpdateListItems method of the Lists web service, and I can update an item in just about any kind of list, and folders in non-document library lists, but I can't seem to update the name of a folder in a document library. I must use the web services API, as sharepoint is not local. If my update batch looks like this: <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0" ViewName=""> <Method ID="1" Cmd="Update"> <Field Name="ID">2</Field> <Field Name="Title">MyUpdatedFolderName</Field> <Field Name="FileLeafRef">MyUpdatedFolderName</Field> </Method> </Batch> I get no exception but the name is unchanged. If my update batch looks like this: <Batch OnError="Continue" PreCalc="TRUE" ListVersion="0" ViewName=""> <Method ID="1" Cmd="Update"> <Field Name="ID">2</Field> <Field Name="Title">MyUpdatedFolderName</Field> <Field Name="BaseName">MyUpdatedFolderName</Field> </Method> </Batch> I get an error result that the list item could not be found. I know the list item is there. Anyone have any ideas?

    Read the article

  • How to create a progress bar while downloading a file using the windows API?

    - by Jorge Chayan
    I'm working on an application in MS Visual C++ using the Windows API that must download a file and place it in a folder. I have already implemented the download using the URLDownloadToFile function, but I want to create a PROGRESS_CLASS progress bar with marquee style while the file is being downloaded, and it doesn't seem to get animated in the process. This is the function I use for downloading: BOOL SOXDownload() { HRESULT hRez = URLDownloadToFile(NULL, "url","C:\\sox.zip", 0, NULL); if (hRez == E_OUTOFMEMORY ) { MessageBox(hWnd, "Out of memory Error","", MB_OK); return FALSE; } if (hRez != S_OK) { MessageBox(hWnd, "Error downloading sox.", "Error!", MB_ICONERROR | MB_SYSTEMMODAL); return FALSE; } if (hRez == S_OK) { BSTR file = SysAllocString(L"C:\\sox.zip"); BSTR folder = SysAllocString(L"C:\\"); Unzip2Folder(file, folder); ::MessageBoxA(hWnd, "Sox Binaries downloaded succesfully", "Success", MB_OK); } return TRUE; } Later, inside WM_CREATE (in my main window's message processor), I call: if (!fileExists("C:\\SOX\\SOX.exe")) { components[7] = CreateWindowEx(0, PROGRESS_CLASS, NULL, WS_VISIBLE | PBS_MARQUEE, GetSystemMetrics(SM_CXSCREEN) / 2 - 80, GetSystemMetrics(SM_CYSCREEN) / 2 + 25, 200, 50, hWnd, NULL, NULL, NULL); SetWindowText(components[7], "Downloading SoX"); SendMessage(components[7], PBM_SETRANGE, 0, (LPARAM) MAKELPARAM(0, 50)); SendMessage(components[7], PBM_SETMARQUEE, TRUE, MAKELPARAM( 0, 50)); SOXDownload(); SendMessage(components[7], WM_CLOSE, NULL, NULL); } And as I want, I get a tiny progress bar... But it's not animated, and when I place the cursor over the bar, the cursor indicates that the program is busy downloading the file. When the download is complete, the window closes as I requested: SendMessage(components[7], WM_CLOSE, NULL, NULL); So the question is: how can I make the bar move while downloading the file? I want it done with marquee style for simplicity. Thanks in advance.
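    One likely explanation (an assumption, since it depends on which thread runs SOXDownload): a marquee bar only animates while its thread keeps pumping window messages, and URLDownloadToFile blocks the window's thread until the download finishes, so the bar never gets a chance to redraw and the busy cursor appears. The usual cure is to run the blocking download on a worker thread and close the bar when it completes. The same principle, sketched in Java/Swing to match the other examples on this page (JProgressBar's indeterminate mode plays the role of the marquee style):

    import javax.swing.*;

    public class MarqueeDownloadDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Downloading...");
                JProgressBar bar = new JProgressBar();
                bar.setIndeterminate(true);            // Swing's equivalent of a marquee bar
                frame.add(bar);
                frame.pack();
                frame.setVisible(true);

                // Do the blocking work OFF the UI thread so the bar keeps animating.
                new SwingWorker<Void, Void>() {
                    @Override
                    protected Void doInBackground() throws Exception {
                        Thread.sleep(5000);            // stand-in for the blocking download
                        return null;
                    }
                    @Override
                    protected void done() {
                        frame.dispose();               // close the window once the download finishes
                    }
                }.execute();
            });
        }
    }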

    Read the article

  • Lookback API: How long is a defect in a particular state?

    - by user1195996
    We have a state in our defects called "Need More Information". I would like to create a graph over time of how many defects are in that state at any given point in time. I think I can get the info to do that with the Lookback API with the following query: my $find = { State => 'Need More Information', '_PreviousValues.State' => {'$ne' => 'Need More Information'}, _TypeHierarchy => -51006, # defect _ValidFrom => { '$gte' => '2012-09-01TZ', '$lt' => '2012-10-23TZ', } }; I thought that would give me back a list of all defect snapshots where the defect was transitioning into the "Need More Information" state, but it does not (it seems to list everything that was ever in the "Need More Information" state). Technically what I need is a query that lists snapshots of any defects transitioning either TO OR FROM the "Need More Information" state, but since this simpler one did not seem to work as I expected, I thought I would ask first why the query above did not work the way I expected.

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work? 
Update I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in logs Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine facter fqdn and hostname both return a fully qualified host name [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31 cleaned all certificates on master and agent and regenerated the certificates and signed them on master using the option --allow-dns-alt-names [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' however, that doesn't help either; I get same errors as before. Not sure why in the logs it shows comparing access rules by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • Which SDK should I choose on Amazon Web Services to build an API in a server app?

    - by Nguyen Minh Binh
    I am a newbie on Amazon Web Services. My task is to set up and then build a web service that provides APIs to Mac OS, iOS, and Android clients. Some of the APIs and the database need to be kept secure. I see that AWS supports multiple platforms such as Java, .Net, PHP, etc. It also supports many database management systems. On top of that, there are two dedicated SDKs for Android and iOS apps. So, what should I choose (Java, .Net, PHP, ...) to carry out my task? Does AWS support all web service protocols? Does it support secure web services? Thanks a lot.

    Read the article

  • Screen Snip/Capture Utility callable from API or command line?

    - by Raymond
    I am looking for a screen capture tool that is controllable from the command line. Ideally I would pass the filename, and possibly some options (capture mouse pointer or not, etc.), then it would take control, allow the user to select a region of the screen, and then save it to the specified file and disappear. For example: screensnip.exe /output c:\users\public\pictures\screenshot.png /hidecursor, after which the user draws a rectangle on screen and control passes back to the batch file, script, etc.
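    If nothing off the shelf fits, a tiny utility is easy to script yourself. Below is a minimal Java sketch (class name and argument layout are made up for illustration): it captures a fixed rectangle given on the command line and writes a PNG, which covers the batch-file use case; interactive region selection is left out, and note that java.awt.Robot does not include the mouse pointer in its captures.

    import java.awt.Rectangle;
    import java.awt.Robot;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    public class ScreenSnip {
        // Usage: java ScreenSnip <output.png> <x> <y> <width> <height>
        public static void main(String[] args) throws Exception {
            File output = new File(args[0]);
            Rectangle region = new Rectangle(
                    Integer.parseInt(args[1]), Integer.parseInt(args[2]),
                    Integer.parseInt(args[3]), Integer.parseInt(args[4]));
            BufferedImage capture = new Robot().createScreenCapture(region);
            ImageIO.write(capture, "png", output);
        }
    }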

    Read the article

  • Do I need to recompile PHP to make use of CURL API?

    - by amn
    I have both Apache and PHP set up manually, albeit the latter without CURL. There is this jungle of instructions and explanations on extensions for PHP. I have a very straightforward question: what do I need to do to enable CURL in a more dynamic way? I resent the idea of static linking; in fact I hate and avoid static linking like the plague. Is it possible to have my Apache and PHP understand that there is CURL in town? I can compile CURL if necessary. Package management may be out of the question, because I built PHP myself - I am on Ubuntu, and it does not provide PHP without Suhosin and a whole lot of time, so I removed it and built PHP myself. The whole slew of related questions simply propose installing the "php5-curl" package, which is exactly the one thing I CANNOT do since it installs it in a completely unrelated directory, which my PHP does not even seem to bother linking to.

    Read the article

  • BeagleBone Black running Debian: does the device tree overlay act as an API?

    - by user3953989
    This may be more of a Linux-specific question, but... I've been reading many tutorials and it seems that you can use JavaScript, Python, and C++ to write code for the BeagleBone Black (BBB). It looks like the way C++ interfaces with the BBB hardware is via reading/writing text files on the OS, while Python has its own library. All the C++ examples out there control the GPIO and PWM via reading/writing text files. Is this the only way to access the hardware, or is that just how Linux does drivers?
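    The text-file approach is not C++-specific: the files under /sys are a kernel interface, so any language that can read and write files can drive the pins, and the Python library is essentially a convenience wrapper around the same idea. As a rough Java sketch, assuming the legacy /sys/class/gpio interface and a board-specific GPIO number chosen purely for illustration:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class GpioBlink {
        private static final String GPIO_ROOT = "/sys/class/gpio";
        private static final String PIN = "60";   // hypothetical GPIO number, depends on the header pin used

        private static void write(String file, String value) throws IOException {
            Files.write(Paths.get(file), value.getBytes());
        }

        public static void main(String[] args) throws Exception {
            write(GPIO_ROOT + "/export", PIN);                         // expose the pin to userspace
            write(GPIO_ROOT + "/gpio" + PIN + "/direction", "out");
            for (int i = 0; i < 10; i++) {
                write(GPIO_ROOT + "/gpio" + PIN + "/value", "1");      // pin high
                Thread.sleep(500);
                write(GPIO_ROOT + "/gpio" + PIN + "/value", "0");      // pin low
                Thread.sleep(500);
            }
            write(GPIO_ROOT + "/unexport", PIN);                       // clean up
        }
    }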

    Read the article

  • BI and EPM Landscape

    - by frank.buytendijk
    Most of my blog entries are not about Oracle products, and most of the latest entries are about topics such as IT strategy and enterprise architecture. However, given my background at Gartner, and at Hyperion, I still keep a close eye on what's happening in BI and EPM. One important reason is that I believe there is significant competitive value for organizations getting BI and EPM right. Davenport and Harris wrote a great book called "Competing on Analytics", in which they explain this in a very engaging and convincing way. At Oracle we have defined the concept of "management excellence" that outlines what organizations have to do to keep or create a competitive edge. It's not only in the business processes, but also in the management processes. Recently, Gartner published its 2009 market shares report for BI, Analytics, and Performance Management. Gartner identifies the same three segments that Oracle does: (1) CPM Suites (Oracle refers not to Corporate Performance Management, but Enterprise Performance Management), (2) BI Platform, and (3) Analytic Applications & Performance Management. According to Gartner, Oracle's share is increasing with revenue growing by more than 5%. Oracle currently holds the #2 market share position in the overall BI Software space based on total BI software revenue. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 Gartner has ranked Oracle as #1 in the CPM Suites worldwide sub-segment based on total BI software revenue, and Oracle is gaining share with revenue growing by more than 6% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 The Analytic Applications & Performance Management subsegment is more fragmented. It has for instance a very large "Other Vendors" category. The largest player traditionally is SAS. Analytic Applications are often meant for very specific analytic needs in very specific industry sectors. According to Gartner, from the large vendors, again Oracle is the one who is gaining the most share - with total BI software revenue growth close to 15% in 2009. Source: Gartner Dataquest Market Share: Business Intelligence, Analytics and Performance Management Software, Worldwide, 2009; Dan Sommer and Bhavish Sood; Apr 2010 I believe this shows Oracle's integration strategy is working. In fact, integration actually is the innovation. BI and EPM have been silo technology platforms and application suites way too long. Management and measuring performance should be very closely linked to strategy execution, which is the domain of other business application areas such as CRM, ERP, and Supply Chain. BI and EPM are not about "making better decisions" anymore, but are part of a tangible action framework. Furthermore, organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive. 
In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them "upperware". Many are active in the BI and EPM space. Big players can offer a lot, but there are always many areas that are covered by specialty vendors. Oracle openly embraces those technologies within the ecosystem as well. Complete, open and integrated still accurately describes the Oracle product strategy. frank

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We are using geospatial tagging of each transaction so we can create a real-time fraud-map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot… hopefully below you can see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction: This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona there is a high probability that this transaction in Arizona is actually fraudulent (I am fast on my Trek but not that fast!) and we can flag this up in real-time on our dashboard: In this post I have used the term "real-time" a couple of times and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, is simpler to use and is faster, which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
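    Since the query in the screenshot is hard to read, here is a rough JDBC sketch of the idea only. Everything in it is an assumption for illustration: the transactions table, its card_id, txn_time (TIMESTAMP) and SDO_GEOMETRY location columns, the connection details, and the "more than 500 km in under 5 minutes" threshold; the real demo query will differ.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FraudPatternQuery {
        public static void main(String[] args) throws Exception {
            // Illustrative 12c MATCH_RECOGNIZE + spatial query; names and thresholds are assumptions.
            String sql =
                "SELECT card_id, susp_time " +
                "FROM transactions " +
                "MATCH_RECOGNIZE ( " +
                "  PARTITION BY card_id ORDER BY txn_time " +
                "  MEASURES y.txn_time AS susp_time " +
                "  ONE ROW PER MATCH " +
                "  PATTERN (x y) " +
                "  DEFINE y AS " +
                "    SDO_GEOM.SDO_DISTANCE(x.location, y.location, 0.005, 'unit=KM') > 500 " +
                "    AND y.txn_time - x.txn_time < INTERVAL '5' MINUTE " +
                ")";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/orcl", "scott", "tiger");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString("card_id")
                            + " flagged at " + rs.getTimestamp("susp_time"));
                }
            }
        }
    }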

    Read the article

  • EPM 11.1.2.2 Architecture: Financial Performance Management Applications

    - by Marc Schumacher
     Financial Management can be accessed either by a browser based client or by SmartView. Starting from release 11.1.2.2, the Financial Management Windows client does not longer access the Financial Management Consolidation server. All tasks that require an on line connection (e.g. load and extract tasks) can only be done using the web interface. Any client connection initiated by a browser or SmartView is send to the Oracle HTTP server (OHS) first. Based on the path given (e.g. hfmadf, hfmofficeprovider) in the URL, OHS makes a decision to forward this request either to the new Financial Management web application based on the Oracle Application Development Framework (ADF) or to the .NET based application serving SmartView retrievals running on Internet Information Server (IIS). Any requests send to the ADF web interface that need to be processed by the Financial Management application server are send to the IIS using HTTP protocol and will be forwarded further using DCOM to the Financial Management application server. SmartView requests, which are processes by IIS in first row, are forwarded to the Financial Management application server using DCOM as well. The Financial Management Application Server uses OLE DB database connections via native database clients to talk to the Financial Management database schema. Communication between the Financial Management DME Listener, which handles requests from EPMA, and the Financial Management application server is based on DCOM.  Unlike most other components Essbase Analytics Link (EAL) does not have an end user interface. The only user interface is a plug-in for the Essbase Administration Services console, which is used for administration purposes only. End users interact with a Transparent or Replicated Partition that is created in Essbase and populated with data by EAL. The Analytics Link Server deployed on WebLogic communicates through HTTP protocol with the Analytics Link Financial Management Connector that is deployed in IIS on the Financial Management web server. Analytics Link Server interacts with the Data Synchronisation server using the EAL API. The Data Synchronization server acts as a target of a Transparent or Replicated Partition in Essbase and uses a native database client to connect to the Financial Management database. Analytics Link Server uses JDBC to connect to relational repository databases and Essbase JAPI to connect to Essbase.  As most Oracle EPM System products, browser based clients and SmartView can be used to access Planning. The Java based Planning web application is deployed on WebLogic, which is configured behind an Oracle HTTP Server (OHS). Communication between Planning and the Planning RMI Registry Service is done using Java Remote Message Invocation (RMI). Planning uses JDBC to access relational repository databases and talks to Essbase using the CAPI. Be aware of the fact that beside the Planning System database a dedicated database schema is needed for each application that is set up within Planning.  As Planning, Profitability and Cost Management (HPCM) has a pretty simple architecture. Beside the browser based clients and SmartView, a web service consumer can be used as a client too. All clients access the Java based web application deployed on WebLogic through Oracle HHTP Server (OHS). Communication between Profitability and Cost Management and EPMA Web Server is done using HTTP protocol. JDBC is used to access the relational repository databases as well as data sources. 
Essbase JAPI is utilized to talk to Essbase.  For Strategic Finance, two clients exist, SmartView and a Windows client. While SmartView communicates through the web layer to the Strategic Finance Server, Strategic Finance Windows client makes a direct connection to the Strategic Finance Server using RPC calls. Connections from Strategic Finance Web as well as from Strategic Finance Web Services to the Strategic Finance Server are made using RPC calls too. The Strategic Finance Server uses its own file based data store. JDBC is used to connect to the EPM System Registry from web and application layer.  Disclosure Management has three kinds of clients. While the browser based client and SmartView interact with the Disclosure Management web application directly through Oracle HTTP Server (OHS), Taxonomy Designer does not connect to the Disclosure Management server. Communication to relational repository databases is done via JDBC, to connect to Essbase the Essbase JAPI is utilized.

    Read the article

  • Enabling Google Webmaster Tools With Your GWB Blog

    - by ToStringTheory
    I’ll be honest and save you some time, if you don’t have your own domain for your GWB blog, this won’t help, you may just want to move on…  I don’t want to waste your time……… Still here?  Good.  How great are Google’s website tools?  I don’t just mean Analytics which rocks, but also their Webmaster Tools (https://www.google.com/webmasters/tools/) which gives you a glimpse into the queries that provide you your website traffic, search engine behavior on your site, and important keywords, just to name a few.   Pictured Above: Cool statistics. Problem Thanks to svickn over at wtfnext.com (another GeeksWithBlogs blog), we already have the knowledge on how to setup Google Analytics (wtfnext.com - How to: Set up Google Analytics on your GeeksWithBlogs blog).  However, one of the questions raised in the post, and even semi-answered in the questions, was how to setup Google Webmaster Tools with your blog as well. At first glance, it seems like it can’t be done.  Google graciously gives you several different options on how to authorize that you own a site.  The authentication options are: 1. (Recommended) – Upload an HTML file to your server 2. Add a meta tag to your site’s home page 3. Use your Google Analytics account 4. Add a DNS record to your domain’s configuration Since you don’t have access to the base path, you can’t do #1.  Same goes for #2 since you can’t edit the master/index page.  As for #3, they REQUIRE the Analytics code to be in the <head> section of your page, so even though we can use the workaround of hosting it in the news section, it won’t allow it since it isn’t in the correct place. Solution Last I checked, I didn’t see the DNS record option for Webmaster Tools.  Maybe this was recently added, or maybe I don’t remember it since I was always able to use some other method to authorize it.  In this case though, this is the option that we need.  My registrar wasn’t in their list, but they provide detailed enough instructions for the ‘Other’ option: Simply create a TXT record with your domain hoster (mine is DynDns), fill in the tag information, and then click verify.  My entry was able to be resolved immediately, but since you are working with DNS, it may take longer.  If after 24 hours you still aren’t able to verify, you can use a site such as mxtoolbox.com, and in the searchbox type “txt: {domain-name-here}”, to see if your TXT record was entered successfully. It is pretty simple to setup the TXT entry in DynDns, but if you have questions/comments, feel free to post them. Conclusion With this simple workaround (not really a workaround, but feature since they offer it..), you are now able to see loads of information regarding your standings in the world of the Google Search Engine.  No critical issues?  Did I do something wrong?! As an aside, you can do the same thing with the Bing Webmaster Tools by adding a CNAME record to bing.verify.com…  Instructions can be found on the ‘Add Site’ popup when adding your site. If you don’t have your own domain, but continued, to read to this point – thank you!
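    One extra trick while you wait for propagation: you can check the TXT record yourself from Java instead of (or as well as) mxtoolbox.com. A small sketch, assuming a JVM that ships the com.sun.jndi.dns provider and with example.com standing in for your own domain:

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.DirContext;
    import javax.naming.directory.InitialDirContext;

    public class TxtRecordCheck {
        public static void main(String[] args) throws Exception {
            String domain = args.length > 0 ? args[0] : "example.com";  // your blog's domain
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
            DirContext ctx = new InitialDirContext(env);
            Attributes attrs = ctx.getAttributes(domain, new String[] { "TXT" });
            Attribute txt = attrs.get("TXT");
            if (txt == null) {
                System.out.println("No TXT records visible yet - keep waiting for DNS to propagate.");
                return;
            }
            NamingEnumeration<?> values = txt.getAll();
            while (values.hasMore()) {
                System.out.println("TXT: " + values.next());  // look for the google-site-verification token
            }
        }
    }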

    Read the article

  • What output and recording ports does the Java Sound API find on your computer?

    - by Dave Carpeneto
    Hi all - I'm working with the Java Sound API, and it turns out if I want to adjust recording volumes I need to model the hardware that the OS exposes to Java. Turns out there's a lot of variety in what's presented. Because of this I'm humbly asking that anyone able to help me run the following on their computer and post back the results so that I can get an idea of what's out there. A thanks in advance to anyone that can assist :-) import javax.sound.sampled.*; public class SoundAudit { public static void main(String[] args) { try { System.out.println("OS: "+System.getProperty("os.name")+" "+ System.getProperty("os.version")+"/"+ System.getProperty("os.arch")+"\nJava: "+ System.getProperty("java.version")+" ("+ System.getProperty("java.vendor")+")\n"); for (Mixer.Info thisMixerInfo : AudioSystem.getMixerInfo()) { System.out.println("Mixer: "+thisMixerInfo.getDescription()+ " ["+thisMixerInfo.getName()+"]"); Mixer thisMixer = AudioSystem.getMixer(thisMixerInfo); for (Line.Info thisLineInfo:thisMixer.getSourceLineInfo()) { if (thisLineInfo.getLineClass().getName().equals( "javax.sound.sampled.Port")) { Line thisLine = thisMixer.getLine(thisLineInfo); thisLine.open(); System.out.println(" Source Port: " +thisLineInfo.toString()); for (Control thisControl : thisLine.getControls()) { System.out.println(AnalyzeControl(thisControl));} thisLine.close();}} for (Line.Info thisLineInfo:thisMixer.getTargetLineInfo()) { if (thisLineInfo.getLineClass().getName().equals( "javax.sound.sampled.Port")) { Line thisLine = thisMixer.getLine(thisLineInfo); thisLine.open(); System.out.println(" Target Port: " +thisLineInfo.toString()); for (Control thisControl : thisLine.getControls()) { System.out.println(AnalyzeControl(thisControl));} thisLine.close();}}} } catch (Exception e) {e.printStackTrace();}} public static String AnalyzeControl(Control thisControl) { String type = thisControl.getType().toString(); if (thisControl instanceof BooleanControl) { return " Control: "+type+" (boolean)"; } if (thisControl instanceof CompoundControl) { System.out.println(" Control: "+type+ " (compound - values below)"); String toReturn = ""; for (Control children: ((CompoundControl)thisControl).getMemberControls()) { toReturn+=" "+AnalyzeControl(children)+"\n";} return toReturn.substring(0, toReturn.length()-1);} if (thisControl instanceof EnumControl) { return " Control:"+type+" (enum: "+thisControl.toString()+")";} if (thisControl instanceof FloatControl) { return " Control: "+type+" (float: from "+ ((FloatControl) thisControl).getMinimum()+" to "+ ((FloatControl) thisControl).getMaximum()+")";} return " Control: unknown type";} } All the application does is print out a line about the OS, a line about the JVM, and a few lines about the hardware found that may pertain to recording hardware. For example on my PC at work I get the following: OS: Windows XP 5.1/x86 Java: 1.6.0_07 (Sun Microsystems Inc.) 
Mixer: Direct Audio Device: DirectSound Playback [Primary Sound Driver] Mixer: Direct Audio Device: DirectSound Playback [SoundMAX HD Audio] Mixer: Direct Audio Device: DirectSound Capture [Primary Sound Capture Driver] Mixer: Direct Audio Device: DirectSound Capture [SoundMAX HD Audio] Mixer: Software mixer and synthesizer [Java Sound Audio Engine] Mixer: Port Mixer [Port SoundMAX HD Audio] Source Port: MICROPHONE source port Control: Microphone (compound - values below) Control: Select (boolean) Control: Microphone Boost (boolean) Control: Front panel microphone (boolean) Control: Volume (float: from 0.0 to 1.0) Source Port: LINE_IN source port Control: Line In (compound - values below) Control: Select (boolean) Control: Volume (float: from 0.0 to 1.0) Control: Balance (float: from -1.0 to 1.0)

    Read the article

  • How can I use Web Services Core to send a complex type as a parameter to a SOAP API method

    - by Matthew Brindley
    I don't do much Cocoa programming, so I'm probably missing something obvious, so please excuse the basic question. I have a SOAP method that expects a complex type as a paramater. Here's some WSDL: <s:element name="SaveTestResult"> <s:complexType> <s:sequence> <s:element minOccurs="0" maxOccurs="1" name="result" type="tns:TestItemResponse" /> </s:sequence> </s:complexType> </s:element> Here's the definition of the complex type "TestItemResponse": <s:complexType name="TestItemResponse"> <s:sequence> <s:element minOccurs="1" maxOccurs="1" name="TestItemRequestId" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="ExternalId" type="s:int" /> <s:element minOccurs="0" maxOccurs="1" name="ApiId" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="InboxGuid" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="SpamResult" type="tns:SpamResult" /> <s:element minOccurs="0" maxOccurs="1" name="ResultImageSet" type="tns:ResultImageSet" /> <s:element minOccurs="1" maxOccurs="1" name="ExclusiveUseMailAccountId" type="s:int" /> <s:element minOccurs="1" maxOccurs="1" name="State" type="tns:TestItemResponseState" /> <s:element minOccurs="0" maxOccurs="1" name="ErrorShortDescription" type="s:string" /> <s:element minOccurs="0" maxOccurs="1" name="ErrorFullDescription" type="s:string" /> </s:sequence> </s:complexType> I've been using Web Services Core to call a SOAP API method that requires a simple string param, that works great. That same method returns a complex type which WSC converted into nested NSDictionaries, so no problems there. So I assumed I'd be able to convert my local TestItemResponse class into an NSDictionary and then use that as the complex type param. It almost worked, but unfortunately WSC set the object's type as "Dictionary", instead of "TestItemResponse", and the server complained. <TestItemResponse xsi:type=\"SOAP-ENC:Dictionary\"> <ErrorFullDescription xsi:type=\"xsd:string\">foo</ErrorFullDescription> ... I can't seem to find anything that allows you to override the type WSC assigns to the element in the SOAP XML. I've been using code adapted from here, I'm happy to list it, it's just quite long and this is already the longest SO question I've ever posted.

    Read the article

  • base for setQueueXmlPath

    - by antony.trupe
    I can't figure out how to point unit tests at the queue config file. Unit Test snippet // TaskQueue setup LocalTaskQueueTestConfig tqConfig = new LocalTaskQueueTestConfig(); tqConfig.setQueueXmlPath("/war/WEB_INF/queue.xml"); Stack Trace java.lang.IllegalStateException: The specified queue is unknown : zip-fetch at com.google.appengine.api.labs.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:56) at com.google.appengine.api.labs.taskqueue.QueueApiHelper.translateError(QueueApiHelper.java:111) at com.google.appengine.api.labs.taskqueue.QueueApiHelper.makeSyncCall(QueueApiHelper.java:32) at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:310) at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:282) at com.google.appengine.api.labs.taskqueue.QueueImpl.add(QueueImpl.java:267) at ...
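    The "unknown queue" error usually means the local task-queue stub never loaded the queue definitions, so it only knows about the default queue. A hedged sketch of one setup that works in many projects: wire the config through LocalServiceTestHelper and point it at the queue.xml on disk (the path and queue name below are assumptions; note the hyphen in WEB-INF):

    import com.google.appengine.api.labs.taskqueue.Queue;
    import com.google.appengine.api.labs.taskqueue.QueueFactory;
    import com.google.appengine.tools.development.testing.LocalServiceTestHelper;
    import com.google.appengine.tools.development.testing.LocalTaskQueueTestConfig;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class ZipFetchQueueTest {

        // Point the local task-queue stub at the app's real queue definitions.
        private final LocalServiceTestHelper helper = new LocalServiceTestHelper(
                new LocalTaskQueueTestConfig().setQueueXmlPath("war/WEB-INF/queue.xml"));

        @Before
        public void setUp() { helper.setUp(); }

        @After
        public void tearDown() { helper.tearDown(); }

        @Test
        public void enqueueToNamedQueue() {
            Queue queue = QueueFactory.getQueue("zip-fetch"); // must match a <name> entry in queue.xml
            queue.add();                                      // default task, just to prove the queue is known
        }
    }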

    Read the article

  • Vertical scroll not working: the scroll guides are there, but the screen does not scroll

    - by Leandro
    package com.lcardenas.infoberry; import net.rim.device.api.system.DeviceInfo; import net.rim.device.api.system.GPRSInfo; import net.rim.device.api.system.Memory; import net.rim.device.api.ui.MenuItem; import net.rim.device.api.ui.component.Dialog; import net.rim.device.api.ui.component.LabelField; import net.rim.device.api.ui.component.Menu; import net.rim.device.api.ui.component.SeparatorField; import net.rim.device.api.ui.container.MainScreen; import net.rim.device.api.ui.container.VerticalFieldManager; import net.rim.device.api.ui.decor.Background; import net.rim.device.api.ui.decor.BackgroundFactory; public class vtnprincipal extends MainScreen { //llamamos a la clase principal private InfoBerry padre; //variables para el menu private MenuItem mnubateria; private MenuItem mnuestado; private MenuItem mnuacerca; public vtnprincipal(InfoBerry padre) { super(); this.padre = padre; } public void incventana(){ VerticalFieldManager _ventana = new VerticalFieldManager(VerticalFieldManager.VERTICAL_SCROLL | VerticalFieldManager.VERTICAL_SCROLLBAR); double tmemoria =((DeviceInfo.getTotalFlashSize()/1024)/1024.00); double fmemoria = ((Memory.getFlashFree()/1024)/1024.00); Background cyan = BackgroundFactory.createSolidBackground(0x00E0FFFF); Background gris = BackgroundFactory.createSolidBackground(0x00DCDCDC ); //Borramos todos de la pantalla this.deleteAll(); //llamamos al menu incMenu(); //DIBUJAMOS LA VENTANA try{ LabelField title = new LabelField("Info Berry", LabelField.FIELD_HCENTER | LabelField.USE_ALL_HEIGHT ); setTitle(title); _ventana.add(new LabelField("Información del Dispositivo", LabelField.FIELD_HCENTER |LabelField.RIGHT | LabelField.USE_ALL_HEIGHT | LabelField.NON_FOCUSABLE )); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); txthorizontal modelo = new txthorizontal("Modelo:", DeviceInfo.getDeviceName()); modelo.setBackground(gris); _ventana.add(modelo); txthorizontal pin = new txthorizontal("PIN:" , Integer.toHexString(DeviceInfo.getDeviceId()).toUpperCase()); pin.setBackground(cyan); _ventana.add(pin); txthorizontal imeid = new txthorizontal("IMEID:" , GPRSInfo.imeiToString(GPRSInfo.getIMEI())); imeid.setBackground(gris); _ventana.add(imeid); txthorizontal version= new txthorizontal("SO Versión:" , DeviceInfo.getSoftwareVersion()); version.setBackground(cyan); _ventana.add(version); txthorizontal plataforma= new txthorizontal("SO Plataforma:" , DeviceInfo.getPlatformVersion()); plataforma.setBackground(gris); _ventana.add(plataforma); txthorizontal numero= new txthorizontal("Numero Telefonico: " , "Hay que firmar"); numero.setBackground(cyan); _ventana.add(numero); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); _ventana.add(new LabelField("Memoria", LabelField.FIELD_HCENTER | LabelField.USE_ALL_HEIGHT | LabelField.NON_FOCUSABLE)); _ventana.add(new SeparatorField()); txthorizontal totalm= new txthorizontal("Memoria app Total:" , mmemoria(tmemoria) + " Mb"); totalm.setBackground(gris); _ventana.add(totalm); txthorizontal disponiblem= new txthorizontal("Memoria app Disponible:" , mmemoria(fmemoria) + " Mb"); disponiblem.setBackground(cyan); _ventana.add(disponiblem); ///txthorizontal estadoram = new txthorizontal("Memoria RAM:" , mmemoria(prueba) + " Mb"); //estadoram.setBackground(gris); //add(estadoram); _ventana.add(new SeparatorField()); _ventana.add(new SeparatorField()); this.add(_ventana); }catch(Exception e){ Dialog.alert("Excepción en clase vtnprincipal: " + e.toString()); } } //DIBUJAMOS EL MENU private void incMenu() { 
MenuItem.separator(30); mnubateria = new MenuItem("Bateria",40, 10) { public void run() { bateria(); } }; mnuestado = new MenuItem("Estado de Red", 50, 10) { public void run() { estado(); } }; mnuacerca = new MenuItem("Acerca de..", 60, 10) { public void run() { acerca(); } }; MenuItem.separator(70); }; // public void makeMenu(Menu menu, int instance) { if (!menu.isDisplayed()) { menu.deleteAll(); menu.add(MenuItem.separator(30)); menu.add(mnubateria); menu.add(mnuestado); menu.add(mnuacerca); menu.add(MenuItem.separator(60)); } } public void bateria(){ padre.vtnbateria.incventana(); padre.pushScreen(padre.vtnbateria); } public void estado(){ padre.vtnestado.incventana(); padre.pushScreen(padre.vtnestado); } public void acerca(){ padre.vtnacerca.incventana(); padre.pushScreen(padre.vtnacerca); } public boolean onClose(){ Dialog.alert("Hasta Luego"); System.exit(0); return true; } public double mmemoria(double x) { if ( x > 0 ) return Math.floor(x * 100) / 100; else return Math.ceil(x * 100) / 100; } }

    Read the article

  • Realtime Twitter Replies?

    - by ejunker
    I have created Twitter bots for many geographic locations. I want to allow users to @-reply to the Twitter bot with commands and then have the bot respond with the results. I would like to have the bot reply to the user as quickly as possible (realtime). Apparently, Twitter used to have an XMPP/Jabber interface that would provide this type of realtime feed of replies but it was shut down. As I see it my options are to use one of the following: REST API This would involve polling every X minutes for each bot. The problem with this is that it is not realtime and each Twitter account would have to be polled. Search API The search API does allow specifying a "-to" parameter in the search and replies to all bots could be aggregated in a search such as "-to bot1 OR -to bot2...". Though if you have hundreds of bots then the search string would get very long and probably exceed the maximum length of a GET request. Streaming API The streaming API looks very promising as it provides realtime results. The API allows you to specify a follow and track parameters. follow is not useful as the bot does not know who will be sending it commands. track allows you to specify keywords to track. This could possibly work by creating a daemon process that connects to the Streaming API and tracks all references to the bot's names. Once again since there are lots of bots to track the length and complexity of the query may be an issue. Another idea would be to track a special hashtag such as #botcommand and then a user could send a command using this syntax @bot1 weather #botcommand. Then by using the Streaming API to track all references to #botcommand would give you a realtime stream of all the commands. Further parsing could then be done to determine which bot to send the command to. Third-party service Are there any third-party companies that have access to the Twitter firehouse and offer realtime data? I haven't investigated these, but here are a few that I have found: Gnip Tweet.IM excla.im TwitterSpy - seems to use polling, not realtime I'm leaning towards using the Streaming API. Is there a better way to get near realtime @-replies for many (hundreds) of Twitter accounts?
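    For what it is worth, a rough Java sketch of the Streaming API + hashtag idea using the twitter4j client of that era (the library choice, the #botcommand convention and the crude @botname parsing are all assumptions, OAuth credentials are expected in twitter4j.properties, and today's API access rules differ):

    import twitter4j.FilterQuery;
    import twitter4j.Status;
    import twitter4j.StatusAdapter;
    import twitter4j.TwitterStream;
    import twitter4j.TwitterStreamFactory;

    public class BotCommandStream {
        public static void main(String[] args) {
            TwitterStream stream = new TwitterStreamFactory().getInstance();

            stream.addListener(new StatusAdapter() {
                @Override
                public void onStatus(Status status) {
                    String text = status.getText();
                    // Crude routing: "@bot1 weather #botcommand" -> dispatch to bot1
                    if (text.startsWith("@")) {
                        String botName = text.substring(1).split("\\s+")[0];
                        System.out.println("Command for " + botName + " from @"
                                + status.getUser().getScreenName() + ": " + text);
                    }
                }
            });

            // Track one agreed-upon hashtag instead of hundreds of bot names.
            stream.filter(new FilterQuery().track(new String[] { "#botcommand" }));
        }
    }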

    Read the article

  • Error using paho-mqtt in App Engine Python App

    - by calumb
    I am trying to right a Google Cloud Platform app in python with Flask that makes an MQTT connection. I have included the paho python library by doing pip install paho-mqtt -t libs/. However, when I try to run the app, even if I don't try to connect to MQTT. I get a weird error about IP address checking: RuntimeError: error('illegal IP address string passed to inet_pton',) It seems something in the remote_socket lib is causing a problem. Is this a security issue? Is there someway to disable it? Relevant code: from flask import Flask import paho.mqtt.client as mqtt import logging as logger app = Flask(__name__) # Note: We don't need to call run() since our application is embedded within # the App Engine WSGI application server. #callback to print out connection status def on_connect(mosq, obj, rc): logger.info('on_connect') if rc == 0: logger.info("Connected") mqttc.subscribe('test', 0) else: logger.info(rc) def on_message(mqttc, obj, msg): logger.info(msg.topic+" "+str(msg.qos)+" "+str(msg.payload)) mqttc = mqtt.Client("mqttpy") mqttc.on_message = on_message mqttc.on_connect = on_connect As well as full stack trace: ERROR 2014-06-03 15:14:57,285 wsgi.py:262] Traceback (most recent call last): File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 239, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 298, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 84, in LoadObject obj = __import__(path[0]) File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/main.py", line 24, in <module> mqttc = mqtt.Client("mqtthtpp") File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/lib/paho/mqtt/client.py", line 403, in __init__ self._sockpairR, self._sockpairW = _socketpair_compat() File "/Users/cbarnes/code/ignite/tank-demo/appengine-flask-demo/lib/paho/mqtt/client.py", line 255, in _socketpair_compat listensock.bind(("localhost", 0)) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/socket.py", line 222, in meth return getattr(self._sock,name)(*args) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 668, in bind self._SetProtoFromAddr(request.mutable_proxy_external_ip(), address) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 632, in _SetProtoFromAddr proto.set_packed_address(self._GetPackedAddr(address)) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 627, in _GetPackedAddr AI_NUMERICSERV|AI_PASSIVE): File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 338, in getaddrinfo canonical=(flags & AI_CANONNAME)) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 211, in _Resolve canon, aliases, addresses = _ResolveName(name, families) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/remote_socket/_remote_socket.py", line 229, in _ResolveName apiproxy_stub_map.MakeSyncCall('remote_socket', 'Resolve', request, reply) File 
"/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall return stubmap.MakeSyncCall(service, call, request, response) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall rpc.CheckSuccess() File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl self.request, self.response) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall self._MakeRealSyncCall(service, call, request, response) File "/Users/cbarnes/google-cloud-sdk/platform/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 234, in _MakeRealSyncCall raise pickle.loads(response_pb.exception()) RuntimeError: error('illegal IP address string passed to inet_pton',) INFO 2014-06-03 15:14:57,291 module.py:639] default: "GET / HTTP/1.1" 500 - Thanks!

    Read the article

  • Recursive Maze in Java

    - by Api
    I have written a short Java program for solving a simple maze problem, going from S to G. I do not understand where it is going wrong. import java.util.Scanner; public class tester { static char [][] grid={ {'.','.'}, {'.','.'}, {'S','G'}, }; static int a=2; static int b=2; static boolean findpath(int x, int y) { if((x > grid.length-1) || (y > grid[0].length-1) || (x < 0 || y < 0)) { return false; } else if(x==a && y==b){ return true; } else if (findpath(x,y-1) == true){ return true; } else if (findpath(x+1,y) == true){ return true; } else if (findpath(x,y+1) == true) { return true; } else if (findpath(x-1,y) == true){ return true; } return false; } public static void main(String[] args){ boolean result=findpath(2,0); System.out.print(result); } } I am giving the starting position directly, and the goal is defined in a and b. Please help.
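    Two things stand out (worth confirming against the intended maze): the goal (a=2, b=2) lies outside the 3x2 grid, since 'G' is at row 2, column 1, so the goal test can never succeed; and nothing marks cells as visited, so the four recursive calls bounce between neighbours until the stack overflows. A corrected sketch with both fixes:

    public class MazeSolver {

        static char[][] grid = {
            { '.', '.' },
            { '.', '.' },
            { 'S', 'G' },
        };

        // Goal coordinates: 'G' sits at row 2, column 1 (column 2 would be out of bounds).
        static int goalX = 2;
        static int goalY = 1;

        static boolean[][] visited = new boolean[grid.length][grid[0].length];

        static boolean findPath(int x, int y) {
            // Reject out-of-bounds cells and cells already explored,
            // otherwise the recursion revisits the same squares forever.
            if (x < 0 || y < 0 || x > grid.length - 1 || y > grid[0].length - 1 || visited[x][y]) {
                return false;
            }
            if (x == goalX && y == goalY) {
                return true;
            }
            visited[x][y] = true;
            return findPath(x, y - 1) || findPath(x + 1, y)
                || findPath(x, y + 1) || findPath(x - 1, y);
        }

        public static void main(String[] args) {
            System.out.print(findPath(2, 0));  // prints true: S and G are adjacent
        }
    }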

    Read the article

  • CodeIgniter: problem using foreach in view

    - by krike
    I have a model and controller that get some data from my database and return the following array: Array ( [2010] => Array ( [year] => 2010 [months] => Array ( [0] => stdClass Object ( [sales] => 2 [month] => Apr ) [1] => stdClass Object ( [sales] => 1 [month] => Nov ) ) ) [2011] => Array ( [year] => 2011 [months] => Array ( [0] => stdClass Object ( [sales] => 1 [month] => Nov ) ) ) ) It shows exactly what it should show, but the keys have different names, so I have no idea how to loop through the years using foreach in my view. Arrays are something I'm not that good at yet :( This is the controller, if you need to know: function analytics() { $this->load->model('admin_model'); $analytics = $this->admin_model->Analytics(); foreach ($analytics as $a): $data[$a->year]['year'] = $a->year; $data[$a->year]['months'] = $this->admin_model->AnalyticsMonth($a->year); endforeach; echo"<pre style='text-align:left;'>"; print_r($data); echo"</pre>"; $data['main_content'] = 'analytics'; $this->load->view('template_admin', $data); }//end of function categories()

    Read the article

  • Grails ClassNotFoundException com.google.common.collect.Maps

    - by user1734199
    I need some help. I am trying to make a controller using the Google Analytics API, but using: statsController.groovy /**************************************************************/ import com.google.gdata.client.analytics.AnalyticsService class StatsController { def myService def stats(){ myService = new AnalyticsService("example-App"); } } /************************************************************/ Error message: ClassNotFoundException occurred when processing request: [...] com.google.common.collect.Maps I've tried adding "gdata.analytics*.jar", "google-collect-1.0.jar", "guava.jar" and "jsr305.jar" to the build path, but without results; the error is always the one I described, or NoClassDefFoundError occurred when processing request: [...] com.google.gdata.client.analytics.AnalyticsService. I need to solve this.

    Read the article

  • Best practice to detect iPhone app only access for web services?

    - by Gaius Parx
    I am developing an iPhone app together with web services. The iPhone app will use GET or POST to retrieve data from the web services, such as http://www.myserver.com/api/top10songs.json to get data for the top ten songs, for example. There is no user account and password for the iPhone app. What is the best practice to ensure that only my iPhone app has access to the web API http://www.myserver.com/api/top10songs.json? The iPhone SDK's UIDevice uniqueIdentifier is not sufficient, as anyone can fake the device id as a parameter when making the API call using wget, curl or a web browser. The web services API will not be published. The data of the web services is not secret or private; I just want to prevent abuse, as there are also APIs that write some data to the server, such as a usage log.
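    A common mitigation (not bulletproof, since a determined attacker can still pull the secret out of the app binary) is to embed a shared secret in the app and have it send a timestamp plus an HMAC of each request, which the server verifies before answering; combined with HTTPS this raises the bar well above casual curl/wget abuse. A rough Java sketch of the server-side check, where the "path|timestamp" format, the replay window and the secret are all made-up illustrations:

    import java.nio.charset.StandardCharsets;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class RequestSigner {

        private static final String SHARED_SECRET = "change-me";   // hypothetical, baked into the app

        // Compute HMAC-SHA256 over "path|timestamp" and compare with the client's signature parameter.
        public static boolean isValidSignature(String path, String timestamp, String clientSignature)
                throws Exception {
            long age = Math.abs(System.currentTimeMillis() / 1000L - Long.parseLong(timestamp));
            if (age > 300) {
                return false;                                       // reject stale requests (replay window)
            }
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SHARED_SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] digest = mac.doFinal((path + "|" + timestamp).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString().equals(clientSignature);
        }
    }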

    Read the article

< Previous Page | 183 184 185 186 187 188 189 190 191 192 193 194  | Next Page >