Search Results

Search found 17016 results on 681 pages for 'ruby debug'.


  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today’s new capabilities include:

    - Storage: Import/Export Hard Disk Drives to your Storage Accounts
    - HDInsight: General Availability of our Hadoop Service in the cloud
    - Virtual Machines: New VM Gallery, ACL support for VIPs
    - Web Sites: WebSocket and Remote Debugging Support
    - Notification Hubs: Segmented customer push notification support with tag expressions
    - TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
    - Developer Analytics: New Relic support for Web Sites + Mobile Services
    - Service Bus: Support for partitioned queues and topics
    - Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

    All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

    Storage: Import/Export Hard Disk Drives to Windows Azure

    I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

    Encrypted Transport: Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

    How to Import/Export your first Hard Drive of Data: You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign up for the Import/Export preview). Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

    For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

    HDInsight: 100% Compatible Hadoop Service in the Cloud

    Last week we announced the general availability release of Windows Azure HDInsight. HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

    HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or our PowerShell and cross-platform command line tools.

    The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data. We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

    Virtual Machines: VM Gallery Enhancements

    Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

    - Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring.
    - Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting “Oracle” in the tree-view you can quickly filter to see the official Oracle-supplied images.
    - MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing – you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
    - Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best.
    - Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

    The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.

    Virtual Machines: ACL Support for VIPs

    A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

    Here are the default behaviors for ACLs in Windows Azure:

    - By default (i.e. no rules specified), all traffic is permitted.
    - When using only Permit rules, all other traffic is denied.
    - When using only Deny rules, all other traffic is permitted.
    - When there is a combination of Permit and Deny rules, all other traffic is denied.

    Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately.

    Web Sites: Web Sockets Support

    With today’s release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site and toggling Web Sockets support to “on”. Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

    Web Sites: Remote Debugging Support

    The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

    Remote Debugging of a Windows Azure Web Site using VS 2013

    Enabling remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the “Attach Debugger” option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site’s execution when it hits any breakpoints that you have set within your web application’s project inside Visual Studio. For example, I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I was able to debug the app remotely using Visual Studio.

    Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

    Tips for Better Debugging

    To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

    Notification Hubs: Segmented Push Notification support with tag expressions

    In August we announced the General Availability of Windows Azure Notification Hubs – a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification Hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.

    New support for using tag expressions to enable advanced customer segmentation

    With today’s release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example:

    - Offers based on multiple preferences – e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
    - Push content to multiple segments in a single message – e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
    - Avoid presenting subsets of a segment with irrelevant content – e.g. a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

    To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

        var notification = new WindowsNotification(messagePayload);
        hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

    In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

    - Social: to “all my group except me” – group:id && !user:id
    - Events: a touchdown event is sent to everybody following either team or any of the players involved in the action – Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
    - Hours: send notifications at specific times, e.g. tag devices with time zone and when it is 12pm in Seattle send to – GMT8 && follows:thaifood
    - Versions and platforms: send a reminder to people still using your first version for Android – version:1.0 && platform:Android

    For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

    TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

    With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The steps below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

    Enabling Continuous Delivery to Windows Azure with Team Foundation Services

    The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services. I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

    Connecting a Team Foundation Services project to Windows Azure

    Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site, simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the “Publish from source control” checkbox was selected. When we click next we’ll be prompted for the location of the source repository, and we’ll select “Team Foundation Services”. Once we do this we’ll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”). When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we’ll be prompted to pick the source repository we want to connect to. Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services.

    Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

    Developer Analytics: New Relic support for Web Sites + Mobile Services

    With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this – and we have updated the Windows Azure Management Portal to make it really easy to configure.

    Enabling New Relic with a Windows Azure Web Site

    Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it. Clicking the “add-on” button will display some additional UI. If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services – including New Relic. Select “New Relic” within the dialog, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase. For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever. Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings on our Web Site.

    Deploying the New Relic Agent as part of a Web Site

    The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu. This will bring up the NuGet package manager. You can search for “New Relic” within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We’ll simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application.

    Monitoring a Web Site using New Relic

    Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal and selecting the New Relic add-on we signed up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account, with the New Relic admin tool loaded within it. We can now see insights into how our app is performing – without having written a single line of monitoring code. The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

    - Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
    - Information about where in the world your customers are hitting the site from (and how performance varies by region)
    - Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
    - Error information including call stack details for exceptions that have occurred at runtime
    - SQL Server profiling information – including which queries executed against your database and what their performance was
    - And a whole bunch more…

    The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver a really good application monitoring solution in only minutes.

    Service Bus: Support for partitioned queues and topics

    With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic; the multiple messaging stores also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

    Billing: New Billing Alert Service

    Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available. The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the “add alert” button I can set up a rule to send myself email any time my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

    Summary

    Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Android "java.lang.NoClassDefFoundError" exception

    - by wpbnewbie
    Hello, I have an Android web service client application. I am trying to use the standard Java WS library support. I have stripped the application down to the minimum, as shown below, to try and isolate the issue. Here is the application:

        package fau.edu.cse;

        import android.app.Activity;
        import android.os.Bundle;
        import android.widget.TextView;

        public class ClassMap extends Activity {
            TextView displayObject;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                // Build screen display string
                String screenString = "Program Started\n\n";

                // Set up the display
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                displayObject = (TextView) findViewById(R.id.TextView01);
                screenString = screenString + "Inflate Display\n\n";

                try {
                    // Set up SOAP service
                    TempConvertSoap service = new TempConvert().getTempConvertSoap();

                    // Successful SOAP object build
                    screenString = screenString + "SOAP Object Correctly Built\n\n";

                    // Display response
                    displayObject.setText(screenString);
                } catch (Throwable e) {
                    e.printStackTrace();
                    displayObject.setText(screenString + "Try Error...\n" + e.toString());
                }
            }
        }

    The classes TempConvert and TempConvertSoap are in the package fau.edu.cse. I have included the Java SE javax libraries in the Java build path. When the Android application tries to create the "service" object I get a "java.lang.NoClassDefFoundError" exception. The two classes TempConvertSoap and TempConvert are generated by wsimport. I am also using several libraries from javax.jws.* and javax.xml.ws.*. Of course the application compiles without error and loads correctly. I know the application is running because my try/catch routine is successfully catching the error and printing it out. Here is what logcat says (notice that it cannot find TempConvert):

        06-12 22:58:39.340: WARN/dalvikvm(200): Unable to resolve superclass of Lfau/edu/cse/TempConvert; (53)
        06-12 22:58:39.340: WARN/dalvikvm(200): Link of class 'Lfau/edu/cse/TempConvert;' failed
        06-12 22:58:39.340: ERROR/dalvikvm(200): Could not find class 'fau.edu.cse.TempConvert', referenced from method fau.edu.cse.ClassMap.onCreate
        06-12 22:58:39.340: WARN/dalvikvm(200): VFY: unable to resolve new-instance 21 (Lfau/edu/cse/TempConvert;) in Lfau/edu/cse/ClassMap;
        06-12 22:58:39.340: DEBUG/dalvikvm(200): VFY: replacing opcode 0x22 at 0x0027
        06-12 22:58:39.340: DEBUG/dalvikvm(200): Making a copy of Lfau/edu/cse/ClassMap;.onCreate code (252 bytes)
        06-12 22:58:39.490: DEBUG/dalvikvm(30): GC freed 2 objects / 48 bytes in 273ms
        06-12 22:58:39.530: DEBUG/ddm-heap(119): Got feature list request
        06-12 22:58:39.620: WARN/Resources(200): Converting to string: TypedValue{t=0x12/d=0x0 a=2 r=0x7f050000}
        06-12 22:58:39.620: WARN/System.err(200): java.lang.NoClassDefFoundError: fau.edu.cse.TempConvert
        06-12 22:58:39.830: WARN/System.err(200): at fau.edu.cse.ClassMap.onCreate(ClassMap.java:26)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1047)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2459)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2512)
        06-12 22:58:39.830: WARN/System.err(200): at android.app.ActivityThread.access$2200(ActivityThread.java:119)
        06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1863)
        06-12 22:58:39.880: WARN/System.err(200): at android.os.Handler.dispatchMessage(Handler.java:99)
        06-12 22:58:39.880: WARN/System.err(200): at android.os.Looper.loop(Looper.java:123)
        06-12 22:58:39.880: WARN/System.err(200): at android.app.ActivityThread.main(ActivityThread.java:4363)
        06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invokeNative(Native Method)
        06-12 22:58:39.880: WARN/System.err(200): at java.lang.reflect.Method.invoke(Method.java:521)
        06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860)
        06-12 22:58:39.880: WARN/System.err(200): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618)
        06-12 22:58:39.880: WARN/System.err(200): at dalvik.system.NativeStart.main(Native Method)
        ...bla...bla...bla

    It would be great if someone just had an answer, however I am looking at debug strategies. I have taken this same application and created a standard Java client application, and it works fine – of course with all of the Android stuff taken out. What would be a good debug strategy? What methods and techniques would you recommend I try to isolate the problem? I am thinking that there is some sort of Dalvik VM incompatibility that is causing the TempConvert class not to load. TempConvert is an interface class that references a lot of very tricky web service attributes. Any help with debug strategies would be gladly appreciated. Thanks for the help, Steve

    Read the article

  • Learn MacRuby or Objective-C?

    - by MaxD
    Well, my first question was a bit too general so I'll try again and hope this one is better. The way I see it is:

    - Ruby: MacRuby or IronRuby or Rails
    - Obj-C: Mac development

    So Ruby clearly has more potential on desktop and web platforms, and now with MacRuby, OS X native (and commercial) apps are on the way. If I get it wrong please correct me. For me, making a fresh start, should I go with modern Ruby or start learning C and Obj-C? Will a newcomer benefit much (in learning and coding time, frustration, complexity) by learning/using MacRuby for OS X apps rather than Objective-C? Or is it pretty much the same? I hope some day to hang around here and help others.
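
    For context, the premise here is that MacRuby exposes Cocoa classes directly in Ruby syntax. A minimal sketch of what that looks like (a hypothetical snippet, not from the question; it assumes MacRuby's built-in framework loader):

        framework 'AppKit'   # MacRuby loads Cocoa frameworks at runtime

        # Cocoa's NSAlert, driven with plain Ruby syntax:
        # messageText= maps to the Objective-C setMessageText: selector
        alert = NSAlert.new
        alert.messageText = "Hello from MacRuby"
        alert.runModal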

    Read the article

  • HttpWebRequest is extremely slow!

    - by Earlz
    Hello, I am using an open source library to connect to my webserver. I was concerned that the webserver was going extremely slow, so I tried a simple test in Ruby and got these results:

    - Ruby program: 2.11 seconds for 100 HTTP GETs
    - C# library: 20.81 seconds for 100 HTTP GETs

    I have profiled and found the problem to be this function:

        private HttpWebResponse GetRawResponse(HttpWebRequest request)
        {
            HttpWebResponse raw = null;
            try
            {
                raw = (HttpWebResponse)request.GetResponse(); // This line!
            }
            catch (WebException ex)
            {
                if (ex.Response is HttpWebResponse)
                {
                    raw = ex.Response as HttpWebResponse;
                }
            }
            return raw;
        }

    The marked line takes over 1 second to complete by itself, while the Ruby program making 1 request takes 0.3 seconds. I am also doing all of these tests on 127.0.0.1, so network bandwidth is not an issue. What could be causing this huge slow down?
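
    For reference, a minimal sketch of the kind of Ruby timing loop described above, using only the standard library (the URL is a placeholder for the real local endpoint):

        require 'net/http'
        require 'benchmark'

        # Hypothetical local endpoint standing in for the real server
        uri = URI('http://127.0.0.1/')

        # Time 100 sequential GETs, as in the comparison quoted above
        elapsed = Benchmark.realtime do
          100.times { Net::HTTP.get_response(uri) }
        end
        puts "100 GETs in #{elapsed.round(2)}s"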

    Read the article

  • Are All Dynamic Languages Typo-friendly?

    - by yar
    With Java on one side and Ruby/Groovy on the other, I know that in the second camp I'm free to make typos which will not get caught until run-time. Is this true of all dynamically-typed languages? Edit: I've been asked to elaborate on the type of typo. In Ruby and in Groovy, you can assign to a variable with an accidental name that is never read. You can call methods that don't exist (obviously your tests should catch this, it's been said). You can refer to classes that don't exist, etc. etc. Basically any valid syntax, even with typographical errors, is valid in both Ruby and Groovy.
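
    A minimal Ruby sketch of the kind of typo being described – it parses and runs, and the misspelled name silently evaluates to nil rather than raising at load time (the class is hypothetical, purely for illustration):

        class Greeter
          def initialize(name)
            @name = name
          end

          def greeting
            # Typo: @nmae is an undefined instance variable, which is
            # silently nil in Ruby -- no error until you notice the output
            "Hello, #{@nmae}"
          end
        end

        puts Greeter.new("World").greeting   # => "Hello, " -- no exception raised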

    Read the article

  • What's the easiest way to get the result of an HTTP GET request using the Java URL class in JRuby?

    - by sipwiz
    I'm attempting to build a Tropo Ruby application and I need to retrieve the result of an HTTPS GET. The Tropo platform doesn't have the httpclient Ruby gem so I can't use that. The Ruby engine used is JRuby, so a suggestion has been to make use of the Java URL class to do the request. I've played around with it a little bit and I seem to be able to create the URL object ok, but am now struggling with how to get the results of executing the request. How do I do it?

        javaURL = java.net.URL.new svcURL
        transferResult = javaURL.getContent()
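
    A sketch of one way to finish the job with plain Java interop (not Tropo-specific; it assumes svcURL holds the HTTPS endpoint, and uses java.util.Scanner as just one convenient way to slurp a stream):

        require 'java'

        java_url = java.net.URL.new(svcURL)
        stream   = java_url.open_stream   # JRuby maps openStream to open_stream; returns a java.io.InputStream

        # Scanner with delimiter "\A" treats the whole stream as a single token
        body = java.util.Scanner.new(stream).use_delimiter("\\A").next
        stream.close
        puts body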

    Read the article

  • Referring to JavaScript instance methods with a pound/hash sign

    - by Josh
    This question is similar to http://stackoverflow.com/questions/736120/why-are-methods-in-ruby-documentation-preceded-by-a-pound-sign I understand why in Ruby instance methods are preceded with a pound sign, helping to differentiate talking about SomeClass#someMethod from SomeObject.someMethod and allowing RDoc to work. And I understand that the authors of PrototypeJS admire Ruby (with good reason) and so they use the hash mark convention in their documentation. My question is: is this a standard practice amongst JavaScript developers or is it just Prototype developers who do this? Asked another way, is it proper for me to refer to instance methods in comments/documentation as SomeClass#someMethod? Or should my documentation refer to SomeClass.someMethod?
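
    For illustration, the Ruby/RDoc convention the question borrows from, shown on a hypothetical class:

        # RDoc convention: Stack#push names the *instance* method,
        # while Stack.empty (or Stack::empty) names the *class* method.
        class Stack
          def self.empty          # referred to in docs as Stack.empty
            new
          end

          def push(item)          # referred to in docs as Stack#push
            (@items ||= []) << item
            self
          end
        end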

    Read the article

  • Getting a NullPointerException at seemingly random intervals, not sure why

    - by Miles
    I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line:

        int rawDepth = depth[offset];

    The depth array is created in this line:

        int[] depth = kinect.getRawDepth();

    I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code compiles 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it? Here's the whole example if it helps:

        // Daniel Shiffman
        // Kinect Point Cloud example
        // http://www.shiffman.net
        // https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing

        import org.openkinect.*;
        import org.openkinect.processing.*;

        // Kinect Library object
        Kinect kinect;

        float a = 0;

        // Size of kinect image
        int w = 640;
        int h = 480;

        // We'll use a lookup table so that we don't have to repeat the math over and over
        float[] depthLookUp = new float[2048];

        void setup() {
          size(800,600,P3D);
          kinect = new Kinect(this);
          kinect.start();
          kinect.enableDepth(true);
          // We don't need the grayscale image in this example
          // so this makes it more efficient
          kinect.processDepthImage(false);

          // Lookup table for all possible depth values (0 - 2047)
          for (int i = 0; i < depthLookUp.length; i++) {
            depthLookUp[i] = rawDepthToMeters(i);
          }
        }

        void draw() {
          background(0);
          fill(255);
          textMode(SCREEN);
          text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16);

          // Get the raw depth as array of integers
          int[] depth = kinect.getRawDepth();

          // We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
          int skip = 4;

          // Translate and rotate
          translate(width/2,height/2,-50);
          rotateY(a);

          for(int x=0; x<w; x+=skip) {
            for(int y=0; y<h; y+=skip) {
              int offset = x+y*w;

              // Convert kinect data to world xyz coordinate
              int rawDepth = depth[offset];
              PVector v = depthToWorld(x,y,rawDepth);

              stroke(255);
              pushMatrix();
              // Scale up by 200
              float factor = 200;
              translate(v.x*factor,v.y*factor,factor-v.z*factor);
              // Draw a point
              point(0,0);
              popMatrix();
            }
          }

          // Rotate
          a += 0.015f;
        }

        // These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
        float rawDepthToMeters(int depthValue) {
          if (depthValue < 2047) {
            return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
          }
          return 0.0f;
        }

        PVector depthToWorld(int x, int y, int depthValue) {
          final double fx_d = 1.0 / 5.9421434211923247e+02;
          final double fy_d = 1.0 / 5.9104053696870778e+02;
          final double cx_d = 3.3930780975300314e+02;
          final double cy_d = 2.4273913761751615e+02;

          PVector result = new PVector();
          double depth = depthLookUp[depthValue]; // rawDepthToMeters(depthValue);
          result.x = (float)((x - cx_d) * depth * fx_d);
          result.y = (float)((y - cy_d) * depth * fy_d);
          result.z = (float)(depth);
          return result;
        }

        void stop() {
          kinect.quit();
          super.stop();
        }

    And here are the errors:

        processing.app.debug.RunnerException: NullPointerException
          at processing.app.Sketch.placeException(Sketch.java:1543)
          at processing.app.debug.Runner.findException(Runner.java:583)
          at processing.app.debug.Runner.reportException(Runner.java:558)
          at processing.app.debug.Runner.exception(Runner.java:498)
          at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367)
          at processing.app.debug.EventThread.handleEvent(EventThread.java:255)
          at processing.app.debug.EventThread.run(EventThread.java:89)

        Exception in thread "Animation Thread" java.lang.NullPointerException
          at org.openkinect.processing.Kinect.enableDepth(Kinect.java:70)
          at PointCloud.setup(PointCloud.java:48)
          at processing.core.PApplet.handleDraw(PApplet.java:1583)
          at processing.core.PApplet.run(PApplet.java:1503)
          at java.lang.Thread.run(Thread.java:637)

    Read the article

  • Pure-JavaScript projects in NetBeans?

    - by Matt Zukowski
    This seems like it ought to be obvious, yet I can't figure it out. I do a lot of JavaScript coding, and I really like NetBeans. Unfortunately I can't figure out how to create a "JavaScript" project in NetBeans. If I go to File -> New Project, my only options are "Java", "Ruby", and "NetBeans Modules". I don't want any of these. My project consists mostly of JavaScript, with a little bit of CSS. I usually just end up creating a "Ruby" project, but this seems silly, since I don't actually have any Ruby code. Why isn't there an option to create a "JavaScript" or "Web" project, or at least a "Generic" project that doesn't revolve around a specific language? Am I missing something here?

    Read the article

  • Brew install pyqt on Mavericks

    - by user3722876
    I have some trouble installing PyQt on my Mac. Here is my Homebrew configuration:

        HOMEBREW_VERSION: 0.9.5
        ORIGIN: https://github.com/Homebrew/homebrew.git
        HEAD: d8af29d63a5b94ffee863788210c3a895315035f
        HOMEBREW_PREFIX: /usr/local
        HOMEBREW_CELLAR: /usr/local/Cellar
        CPU: quad-core 64-bit sandybridge
        OS X: 10.9.3-x86_64
        Xcode: 5.1.1
        CLT: 5.1.0.0.1.1396320587
        Clang: 5.1 build 503
        MacPorts/Fink: /opt/local/bin/port
        X11: 2.7.6 => /opt/X11
        System Ruby: 2.0.0-451
        Perl: /usr/bin/perl
        Python: /opt/local/bin/python => /opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/python2.7
        Ruby: /usr/bin/ruby

    The sip installation is OK and the qt installation is OK, but brew install pyqt fails during make:

        1 error generated.
        make[1]: *** [qtlib.o] Error 1
        1 error generated.
        make[1]: *** [siplib.o] Error 1
        make: *** [all] Error 2

    No idea what's happening...

    Read the article

  • Qt C++ signals and slots did not fire

    - by Xegara
    I have programmed Qt a couple of times already and I really like the signals and slots feature. But now I guess I'm having a problem: when a signal is emitted from one thread, the corresponding slot in another thread is not fired. The connection was made in the main program. This is also my first time using Qt with ROS, which uses CMake. The signals fired by the QThread objects triggered their corresponding slots, but the signal emitted by my class UserInput did not trigger the slot in TfCompleter where it is supposed to. I have tried everything I can. Any help? The code is provided below.

    Main.cpp:

        #include <QCoreApplication>
        #include <QThread>
        #include "userinput.h"
        #include "tfcompleter.h"

        int main(int argc, char** argv)
        {
          QCoreApplication app(argc, argv);

          QThread *thread1 = new QThread();
          QThread *thread2 = new QThread();
          UserInput *input1 = new UserInput();
          TfCompleter *completer = new TfCompleter();

          QObject::connect(input1, SIGNAL(togglePause2()), completer, SLOT(toggle()));
          QObject::connect(thread1, SIGNAL(started()), completer, SLOT(startCounting()));
          QObject::connect(thread2, SIGNAL(started()), input1, SLOT(start()));

          completer->moveToThread(thread1);
          input1->moveToThread(thread2);

          thread1->start();
          thread2->start();

          app.exec();
          return 0;
        }

    What I want to do is this: there are two separate threads. One thread is for the user input. When the user enters [space], that thread emits a signal to toggle the boolean member field of the other thread. The other thread's task is to just continue its process if the user wants it to run, and otherwise not. I wanted to let the user toggle the processing at any time, which is why I put them in separate threads. The following code is the TfCompleter and UserInput.

    tfcompleter.h:

        #ifndef TFCOMPLETER_H
        #define TFCOMPLETER_H

        #include <QObject>
        #include <QtCore>

        class TfCompleter : public QObject
        {
          Q_OBJECT

        private:
          bool isCount;

        public Q_SLOTS:
          void toggle();
          void startCounting();
        };

        #endif

    tflistener.cpp:

        #include "tfcompleter.h"
        #include <iostream>

        void TfCompleter::startCounting()
        {
          static uint i = 0;
          while(true)
          {
            if(isCount)
              std::cout << i++ << std::endl;
          }
        }

        void TfCompleter::toggle()
        {
          // isCount = ~isCount;
          std::cout << "isCount " << std::endl;
        }

    UserInput.h:

        #ifndef USERINPUT_H
        #define USERINPUT_H

        #include <QObject>
        #include <QtCore>

        class UserInput : public QObject
        {
          Q_OBJECT

        public Q_SLOTS:
          void start();  // Waits for the keypress from the user and emits the corresponding signal.

        public:
        Q_SIGNALS:
          void togglePause2();
        };

        #endif

    UserInput.cpp:

        #include "userinput.h"
        #include <iostream>
        #include <cstdio>

        // Implementation of getch
        #include <termios.h>
        #include <unistd.h>

        /* reads from keypress, doesn't echo */
        int getch(void)
        {
          struct termios oldattr, newattr;
          int ch;
          tcgetattr( STDIN_FILENO, &oldattr );
          newattr = oldattr;
          newattr.c_lflag &= ~( ICANON | ECHO );
          tcsetattr( STDIN_FILENO, TCSANOW, &newattr );
          ch = getchar();
          tcsetattr( STDIN_FILENO, TCSANOW, &oldattr );
          return ch;
        }

        void UserInput::start()
        {
          char c = 0;
          while (true)
          {
            c = getch();
            if (c == ' ')
            {
              Q_EMIT togglePause2();
              std::cout << "SPACE" << std::endl;
            }
            c = 0;
          }
        }

    Here is the CMakeLists.txt. I just placed it here also since I don't know whether CMake is also a factor here.

    CMakeLists.txt:

        ##############################################################################
        # CMake
        ##############################################################################
        cmake_minimum_required(VERSION 2.4.6)

        ##############################################################################
        # Ros Initialisation
        ##############################################################################
        include($ENV{ROS_ROOT}/core/rosbuild/rosbuild.cmake)
        rosbuild_init()
        set(CMAKE_AUTOMOC ON)

        # Set the default path for built executables to the "bin" directory
        set(EXECUTABLE_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/bin)
        # Set the default path for built libraries to the "lib" directory
        set(LIBRARY_OUTPUT_PATH ${PROJECT_SOURCE_DIR}/lib)

        # Set the build type. Options are:
        #   Coverage       : w/ debug symbols, w/o optimization, w/ code-coverage
        #   Debug          : w/ debug symbols, w/o optimization
        #   Release        : w/o debug symbols, w/ optimization
        #   RelWithDebInfo : w/ debug symbols, w/ optimization
        #   MinSizeRel     : w/o debug symbols, w/ optimization, stripped binaries
        #set(ROS_BUILD_TYPE Debug)

        ##############################################################################
        # Qt Environment
        ##############################################################################
        # Could use this, but qt-ros would need an updated deb, instead we'll move to catkin
        # rosbuild_include(qt_build qt-ros)
        rosbuild_find_ros_package(qt_build)
        include(${qt_build_PACKAGE_PATH}/qt-ros.cmake)
        rosbuild_prepare_qt4(QtCore)  # Add the appropriate components to the component list here
        ADD_DEFINITIONS(-DQT_NO_KEYWORDS)

        ##############################################################################
        # Sections
        ##############################################################################
        #file(GLOB QT_FORMS RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} ui/*.ui)
        #file(GLOB QT_RESOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} resources/*.qrc)
        file(GLOB_RECURSE QT_MOC RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS include/rgbdslam_client/*.hpp)

        #QT4_ADD_RESOURCES(QT_RESOURCES_CPP ${QT_RESOURCES})
        #QT4_WRAP_UI(QT_FORMS_HPP ${QT_FORMS})
        QT4_WRAP_CPP(QT_MOC_HPP ${QT_MOC})

        ##############################################################################
        # Sources
        ##############################################################################
        file(GLOB_RECURSE QT_SOURCES RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} FOLLOW_SYMLINKS src/*.cpp)

        ##############################################################################
        # Binaries
        ##############################################################################
        rosbuild_add_executable(rgbdslam_client ${QT_SOURCES} ${QT_MOC_HPP})
        #rosbuild_add_executable(rgbdslam_client ${QT_SOURCES} ${QT_RESOURCES_CPP} ${QT_FORMS_HPP} ${QT_MOC_HPP})
        target_link_libraries(rgbdslam_client ${QT_LIBRARIES})

    Read the article

  • macport selfupdate not working

    - by eistrati
        macbookpro:~ eistrati$ port -v
        MacPorts 2.1.2
        macbookpro:~ eistrati$ xcodebuild -version
        Xcode 4.5.2
        Build version 4G2008a
        macbookpro:~ eistrati$ sudo port -d selfupdate
        DEBUG: Copying /Users/eistrati/Library/Preferences/com.apple.dt.Xcode.plist to /opt/local/var/macports/home/Library/Preferences
        DEBUG: MacPorts sources location: /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
        --->  Updating MacPorts base sources using rsync
        rsync: failed to connect to rsync.macports.org: Connection refused (61)
        rsync error: error in socket IO (code 10) at /SourceCache/rsync/rsync-42/rsync/clientserver.c(105) [receiver=2.6.9]
        Command failed: /usr/bin/rsync -rtzv --delete-after rsync://rsync.macports.org/release/tarballs/base.tar /opt/local/var/macports/sources/rsync.macports.org/release/tarballs
        Exit code: 10
        DEBUG: Error synchronizing MacPorts sources: command execution failed
            while executing
        "macports::selfupdate [array get global_options] base_updated"
        Error: /opt/local/bin/port: port selfupdate failed: Error synchronizing MacPorts sources: command execution failed

    Ideas? Please help!

    Read the article

  • PHP Warning: PHP Startup: Unable to load dynamic library php_mysql.dll, Mac 10.6, Apache 2.2, php 5

    - by munchybunch
    I'm trying to use the PHP CLI, and when I enter something like php test.php on the command line it returns:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php/extensions/no-debug-non-zts-20090626/php_mysql.dll' - dlopen(/usr/lib/php/extensions/no-debug-non-zts-20090626/php_mysql.dll, 9): image not found in Unknown on line 0
        something

    test.php contains:

        <?php echo 'something'; ?>

    I checked /usr/lib/php/extensions/no-debug-non-zts-20090626/, and as expected the .dll file isn't there. I'm a complete beginner when it comes to this – what is happening, and how can I fix it? A search of my system for "php_mysql.dll" reveals nothing. Does it have to do with how I compiled it? I don't have the original version of PHP that came with the Mac, I think – I may have reinstalled it somewhere along the way. Any help would be appreciated!
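
    For context, startup warnings like this are produced when an extension= directive in the loaded php.ini points at a library that isn't on disk (php --ini shows which ini files the CLI reads). A hypothetical sketch of the kind of lines involved:

        ; php.ini -- hypothetical excerpt
        ; A Windows-style extension line left over in the config:
        extension=php_mysql.dll
        ; On OS X/Linux the equivalent native build would be a .so, e.g.:
        ; extension=mysql.so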

    Read the article

  • Compiling zip component for PHP 5.2.11 in MAMP PRO

    - by Zlatoroh
    Hello, I installed MAMP PRO on my MacBook Pro (10.6) some time ago. Now I would like to use the zip functions in PHP. I found that I must add zip.so to my extension folder, and I edited php.ini. On my computer I have two different versions of PHP: one in the MAMP folder and the other in /usr/lib, which was pre-installed on my system. Now I wish to compile my zip library for the MAMP version. I got the zip sources for my version of PHP, then in the terminal called phpize so that it uses MAMP's PHP version, followed by the usual build steps:

        /Applications/MAMP/bin/php5/bin/phpize
        ./configure
        make

    Then I moved the compiled zip.so to extensions/no-debug-non-zts-20060613. When MAMP is launched it returns this error:

        [11-Apr-2010 16:33:27] PHP Warning: PHP Startup: zip: Unable to initialize module
        Module compiled with module API=20090626, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0

    Can somebody explain to me how to do this the right way?

    Read the article

  • Can I set up a 'Deny from x' that overrides other confs for debugging?

    - by Nick T
    I'm currently working on developing/deploying a Django application on Apache and am often fiddling with the DEBUG setting, which alters how Django accepts connections by ignoring or using ALLOWED_HOSTS. If DEBUG is False, it uses them, which is handy to keep up some walls around my construction site. However, the useful info it spits out when True is quite nice. I'm currently using an SSH tunnel and allowing only localhost when DEBUG is False, but how can I keep everyone out without relying on the aforementioned ALLOWED_HOSTS? Editing the httpd.conf file, which is in source control, is a bit irritating; I've accidentally committed a few botched configs.
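
    A sketch of one way to do this with Apache 2.2-era mod_authz_host directives, kept in a separate include file so the version-controlled httpd.conf stays untouched (the path is a placeholder and assumes the server's conf.d directory is included after the main config):

        # /etc/apache2/conf.d/debug-lockdown.conf -- hypothetical drop-in file,
        # loaded after the main vhost config so these rules apply on top of it
        <Location />
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>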

    Read the article

  • Forward Hostname via my router to another PC for debugging

    - by Markive
    Hi, my web service runs on, for example, http://mydomain.com/mywebservice.asmx. This works great, but I have a PDA application which I want to debug as it synchronises through this web service. Currently the only way I can do this is to debug the web service running on the actual server, which is far from ideal. What I would like is for any device connecting on my wireless network that requests mywebservice.asmx to have the request forwarded to my development PC, where IIS would then handle it and allow me to debug in Visual Studio. So a device on the network that requests the hostname will hit this PC. I am at a loss as to how to set this up on my router (Zoom ADSL X6); this is massively out of my scope, but any help would be much appreciated.
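
    For reference, when the client device allows it, a common shortcut that sidesteps the router entirely is a hosts-file override on that device (or a local DNS entry), pointing the hostname at the development PC. A hypothetical example with placeholder addresses:

        # /etc/hosts (or \Windows\System32\drivers\etc\hosts) on the client device
        # 192.168.1.50 stands in for the development PC running IIS
        192.168.1.50    mydomain.com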

    Read the article

  • Simple HTTP server that will send the same file for all requests?

    - by Rory McCann
    I need to debug an XML-RPC application, which sends XML replies over HTTP. I have a sample XML reply (i.e. data from the server, sent to the client that isn't working), and I'd like to debug my application. Ideally I'd like a simple HTTP server that will serve one file in reply to all requests. Someone requests /? Send them this file. Someone makes a POST to /server/page.php with a certain cookie? Just send them this file. I don't care about multithreading or security. I will only need to use this for a few hours to debug. I have root on the machine. In other words, I'm hoping there's something as easy to use as this:

        simple_http_server -p 12445 -f my_test_file

    I'm aware of Python's SimpleHTTPServer module, but I'm not sure how to make it work in this case.
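
    In that spirit, a minimal Ruby sketch using WEBrick from the standard library (port and filename taken from the example above; the Content-Type is an assumption based on the XML-RPC context) that answers every request, on any path and with any method, with the same file:

        require 'webrick'

        # Serves my_test_file for every request on port 12445
        server = WEBrick::HTTPServer.new(Port: 12445)

        # mount_proc on "/" catches every request path, GET or POST alike
        server.mount_proc '/' do |req, res|
          res['Content-Type'] = 'text/xml'
          res.body = File.read('my_test_file')
        end

        trap('INT') { server.shutdown }
        server.start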

    Read the article

  • What is the logic behind the PHP extensions/modules file structure?

    - by dotpointer
    I'm trying to configure/build PHP 5.3.10 on Linux/Slackware 12, but the extensions appear in the wrong directory when I run make install. In the php.ini file the extension dir is defined as /usr/lib/php/extensions. The problem is that when I run make install, the newly built extensions are copied to a subfolder of the extensions directory: /usr/lib/php/extensions/no-debug-non-zts-20090626. What am I supposed to do with this: copy the files down from the no-debug-non-zts-20090626 directory into the extensions directory, create symlinks from extensions to the modules in the no-debug-non-zts-20090626 directory (which will take a lot of time), or what? (I know I can do any of them, but I want to know the correct way...)
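
    For what it's worth, the API-stamped subdirectory (no-debug-non-zts-<API date>) is what the build itself considers its extension directory – php-config --extension-dir prints it – so a third option is to point php.ini at that directory rather than moving files around. A sketch of the line involved, with the path taken from the question above:

        ; php.ini
        extension_dir = "/usr/lib/php/extensions/no-debug-non-zts-20090626"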

    Read the article

  • Django - no module named app

    - by Koran
    Hi, I have been trying to get an application written in Django working, so far without success. I have been working on it for some time, and it runs perfectly on the dev server, but I am unable to get it going in the production environment (Apache). My project name is apstat and the app name is basic. I try to access it as http://hostname/apstat but it shows the following error: MOD_PYTHON ERROR ProcessId: 6002 Interpreter: 'domU-12-31-39-06-DD-F4.compute-1.internal' ServerName: 'domU-12-31-39-06-DD-F4.compute-1.internal' DocumentRoot: '/home/ubuntu/server/' URI: '/apstat/' Location: '/apstat' Directory: None Filename: '/home/ubuntu/server/apstat/' PathInfo: '' Phase: 'PythonHandler' Handler: 'django.core.handlers.modpython' Traceback (most recent call last): File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1537, in HandlerDispatch default=default_handler, arg=req, silent=hlist.silent) File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1229, in _process_target result = _execute_target(config, req, object, arg) File "/usr/lib/python2.6/dist-packages/mod_python/importer.py", line 1128, in _execute_target result = object(arg) File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 228, in handler return ModPythonHandler()(req) File "/usr/lib/pymodules/python2.6/django/core/handlers/modpython.py", line 201, in __call__ response = self.get_response(request) File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 134, in get_response return self.handle_uncaught_exception(request, resolver, exc_info) File "/usr/lib/pymodules/python2.6/django/core/handlers/base.py", line 154, in handle_uncaught_exception return debug.technical_500_response(request, *exc_info) File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 40, in technical_500_response html = reporter.get_traceback_html() File "/usr/lib/pymodules/python2.6/django/views/debug.py", line 114, in get_traceback_html return t.render(c) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 178, in render return self.nodelist.render(context) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 779, in render bits.append(self.render_node(node, context)) File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 81, in render_node raise wrapped TemplateSyntaxError: Caught an exception while rendering: No module named basic Original Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 71, in render_node result = node.render(context) File "/usr/lib/pymodules/python2.6/django/template/debug.py", line 87, in render output = force_unicode(self.filter_expression.resolve(context)) File "/usr/lib/pymodules/python2.6/django/template/__init__.py", line 572, in resolve new_obj = func(obj, *arg_vals) File "/usr/lib/pymodules/python2.6/django/template/defaultfilters.py", line 687, in date return format(value, arg) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 269, in format return df.format(format_string) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format pieces.append(force_unicode(getattr(self, piece)())) File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 175, in r return self.format('D, j M Y H:i:s O') File "/usr/lib/pymodules/python2.6/django/utils/dateformat.py", line 30, in format pieces.append(force_unicode(getattr(self, piece)())) File "/usr/lib/pymodules/python2.6/django/utils/encoding.py", line 71, in force_unicode s = unicode(s) File "/usr/lib/pymodules/python2.6/django/utils/functional.py", line 201, in __unicode_cast return self.__func(*self.__args, **self.__kw) File "/usr/lib/pymodules/python2.6/django/utils/translation/__init__.py", line 62, in ugettext return real_ugettext(message) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 286, in ugettext return do_translate(message, 'ugettext') File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 276, in do_translate _default = translation(settings.LANGUAGE_CODE) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 194, in translation default_translation = _fetch(settings.LANGUAGE_CODE) File "/usr/lib/pymodules/python2.6/django/utils/translation/trans_real.py", line 180, in _fetch app = import_module(appname) File "/usr/lib/pymodules/python2.6/django/utils/importlib.py", line 35, in import_module __import__(name) ImportError: No module named basic My settings.py is as follows: INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'apstat.basic', 'django.contrib.admin', ) If I remove apstat.basic, it goes through, but that is not a solution. Is it something I am doing in Apache? My Apache settings are: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /home/ubuntu/server/ <Directory /> Options None AllowOverride None </Directory> <Directory /home/ubuntu/server/apstat> AllowOverride None Order allow,deny allow from all </Directory> <Location "/apstat"> SetHandler python-program PythonHandler django.core.handlers.modpython SetEnv DJANGO_SETTINGS_MODULE apstat.settings PythonOption django.root /home/ubuntu/server/ PythonDebug On PythonPath "['/home/ubuntu/server/'] + sys.path" </Location> </VirtualHost> I have now spent more than a day on this; if someone can help me out, it would be much appreciated.
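
    For what it's worth, the traceback dies while Django renders its own error page and imports the INSTALLED_APPS, and the bare name 'basic' (not 'apstat.basic') is what fails to import. A sketch of the usual mod_python remedies — the extra path entry and the __init__.py check are assumptions, not confirmed fixes:

        # make both the parent dir and the project dir importable, so that
        # 'import apstat.basic' and a bare 'import basic' both resolve:
        PythonPath "['/home/ubuntu/server/', '/home/ubuntu/server/apstat'] + sys.path"
        # and verify the packages really are packages:
        #   /home/ubuntu/server/apstat/__init__.py
        #   /home/ubuntu/server/apstat/basic/__init__.py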

    Read the article

  • Jumbled byte array after using TcpClient and TcpListener

    - by Dylan
    I want to use the TcpClient and TcpListener to send an mp3 file over a network. I implemented a solution of this using sockets, but there were some issues so I am investigating a new/better way to send a file. I create a byte array which looks like this: length_of_filename|filename|file This should then be transmitted using the above mentioned classes, yet on the server side the byte array I read is completely messed up and I'm not sure why. The method I use to send: public static void Send(String filePath) { try { IPEndPoint endPoint = new IPEndPoint(Settings.IpAddress, Settings.Port + 1); Byte[] fileData = File.ReadAllBytes(filePath); FileInfo fi = new FileInfo(filePath); List<byte> dataToSend = new List<byte>(); dataToSend.AddRange(BitConverter.GetBytes(Encoding.Unicode.GetByteCount(fi.Name))); // length of filename dataToSend.AddRange(Encoding.Unicode.GetBytes(fi.Name)); // filename dataToSend.AddRange(fileData); // file binary data using (TcpClient client = new TcpClient()) { client.Connect(Settings.IpAddress, Settings.Port + 1); // Get a client stream for reading and writing. using (NetworkStream stream = client.GetStream()) { // server is ready stream.Write(dataToSend.ToArray(), 0, dataToSend.ToArray().Length); } } } catch (ArgumentNullException e) { Debug.WriteLine(e); } catch (SocketException e) { Debug.WriteLine(e); } } } Then on the server side it looks as follows: private void Listen() { TcpListener server = null; try { // Setup the TcpListener Int32 port = Settings.Port + 1; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); // TcpListener server = new TcpListener(port); server = new TcpListener(localAddr, port); // Start listening for client requests. server.Start(); // Buffer for reading data Byte[] bytes = new Byte[1024]; List<byte> data; // Enter the listening loop. while (true) { Debug.WriteLine("Waiting for a connection... "); string filePath = string.Empty; // Perform a blocking call to accept requests. // You could also user server.AcceptSocket() here. using (TcpClient client = server.AcceptTcpClient()) { Debug.WriteLine("Connected to client!"); data = new List<byte>(); // Get a stream object for reading and writing using (NetworkStream stream = client.GetStream()) { // Loop to receive all the data sent by the client. while ((stream.Read(bytes, 0, bytes.Length)) != 0) { data.AddRange(bytes); } } } int fileNameLength = BitConverter.ToInt32(data.ToArray(), 0); filePath = Encoding.Unicode.GetString(data.ToArray(), 4, fileNameLength); var binary = data.GetRange(4 + fileNameLength, data.Count - 4 - fileNameLength); Debug.WriteLine("File successfully downloaded!"); // write it to disk using (BinaryWriter writer = new BinaryWriter(File.Open(filePath, FileMode.Append))) { writer.Write(binary.ToArray(), 0, binary.Count); } } } catch (Exception ex) { Debug.WriteLine(ex); } finally { // Stop listening for new clients. server.Stop(); } } Can anyone see something that I am missing/doing wrong?
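
    One thing stands out in the server's read loop: NetworkStream.Read returns how many bytes it actually produced, but the loop discards that count and appends the entire 1024-byte buffer on every pass, so leftovers from earlier reads (or trailing zeroes) get spliced into the received data — which would jumble the array exactly as described. A minimal sketch of the fix, assuming the rest of the code stays as posted:

        int bytesRead;
        // Loop to receive all the data sent by the client.
        while ((bytesRead = stream.Read(bytes, 0, bytes.Length)) != 0)
        {
            // only the first 'bytesRead' bytes of the buffer are valid this pass
            for (int i = 0; i < bytesRead; i++)
                data.Add(bytes[i]);
        }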

    Read the article

  • Calling Web Service Functions Asynchronously from a Web Page

    - by SGWellens
    Over on the Asp.Net forums where I moderate, a user had a problem calling a Web Service from a web page asynchronously. I tried his code on my machine and was able to reproduce the problem. I was able to solve his problem, but only after taking the long scenic route through some of the more perplexing nuances of Web Services and Proxies. Here is the fascinating story of that journey. Start with a simple Web Service     public class Service1 : System.Web.Services.WebService    {        [WebMethod]        public string HelloWorld()        {            // sleep 10 seconds            System.Threading.Thread.Sleep(10 * 1000);            return "Hello World";        }    } The 10 second delay is added to make calling an asynchronous function more apparent. If you don't call the function asynchronously, it takes about 10 seconds for the page to be rendered back to the client. If the call is made from a Windows Forms application, the application freezes for about 10 seconds. Add the web service to a web site. Right-click the project and select "Add Web Reference…" Next, create a web page to call the Web Service. Note: An asp.net web page that calls an 'Async' method must have the Async property set to true in the page's header: <%@ Page Language="C#"          AutoEventWireup="true"          CodeFile="Default.aspx.cs"          Inherits="_Default"           Async='true'  %> Here is the code to create the Web Service proxy and connect the event handler. Shrewdly, we make the proxy object a member of the Page class so it remains instantiated between the various events. public partial class _Default : System.Web.UI.Page {    localhost.Service1 MyService;  // web service proxy     // ---- Page_Load ---------------------------------     protected void Page_Load(object sender, EventArgs e)    {        MyService = new localhost.Service1();        MyService.HelloWorldCompleted += EventHandler;          } Here is the code to invoke the web service and handle the event:     // ---- Async and EventHandler (delayed render) --------------------------     protected void ButtonHelloWorldAsync_Click(object sender, EventArgs e)    {        // blocks        ODS("Pre HelloWorldAsync...");        MyService.HelloWorldAsync();        ODS("Post HelloWorldAsync");    }    public void EventHandler(object sender, localhost.HelloWorldCompletedEventArgs e)    {        ODS("EventHandler");        ODS("    " + e.Result);    }     // ---- ODS ------------------------------------------------    //    // Helper function: Output Debug String     public static void ODS(string Msg)    {        String Out = String.Format("{0}  {1}", DateTime.Now.ToString("hh:mm:ss.ff"), Msg);        System.Diagnostics.Debug.WriteLine(Out);    } I added a utility function I use a lot: ODS (Output Debug String). Rather than include the library it is part of, I included it in the source file to keep this example simple. Fire up the project, open up a debug output window, press the button and we get this in the debug output window: 11:29:37.94 Pre HelloWorldAsync... 11:29:37.94 Post HelloWorldAsync 11:29:48.94 EventHandler 11:29:48.94 Hello World   Sweet. The asynchronous call was made and returned immediately. About 10 seconds later, the event handler fires and we get the result. Perfect….right? Not so fast cowboy. Watch the browser during the call: What the heck? The page is waiting for 10 seconds. Even though the asynchronous call returned immediately, Asp.Net is waiting for the event to fire before it renders the page. This is NOT what we wanted. 
I experimented with several techniques to work around this issue. Some may erroneously describe my behavior as 'hacking' but, since no ingesting of Twinkies was involved, I do not believe hacking is the appropriate term. If you examine the proxy that was automatically created, you will find a synchronous call to HelloWorld along with an additional set of methods to make asynchronous calls. I tried the other asynchronous method supplied in the proxy:     // ---- Begin and CallBack ----------------------------------     protected void ButtonBeginHelloWorld_Click(object sender, EventArgs e)    {        ODS("Pre BeginHelloWorld...");        MyService.BeginHelloWorld(AsyncCallback, null);        ODS("Post BeginHelloWorld");    }    public void AsyncCallback(IAsyncResult ar)    {        String Result = MyService.EndHelloWorld(ar);         ODS("AsyncCallback");        ODS("    " + Result);    } The BeginHelloWorld function in the proxy requires a callback function as a parameter. I tested it and the debug output window looked like this: 04:40:58.57 Pre BeginHelloWorld... 04:40:58.57 Post BeginHelloWorld 04:41:08.58 AsyncCallback 04:41:08.58 Hello World It works the same as before except for one critical difference: the page rendered immediately after the function call. I was worried the page object would be disposed after rendering the page, but the system was smart enough to keep the page object in memory to handle the callback. Both techniques have a use: Delayed Render: Say you want to verify a credit card, look up shipping costs and confirm if an item is in stock. You could have three web service calls running in parallel and not render the page until all were finished. Nice. You can send information back to the client as part of the rendered page when all the services are finished. Immediate Render: Say you just want to start a service running and return to the client. You can do that too. However, the page gets sent to the client before the service has finished running, so you will not be able to update parts of the page when the service finishes running. Summary: YourFunctionAsync() and an EventHandler will not render the page until the handler fires. BeginYourFunction() and a CallBack function will render the page as soon as possible. I found all this to be quite interesting and did a lot of searching and researching for documentation on this subject… but there isn't a lot out there. The biggest clues are the parameters that can be sent to the WSDL.exe program: http://msdn.microsoft.com/en-us/library/7h3ystb6(VS.100).aspx Two parameters are oldAsync and newAsync. OldAsync will create the Begin/End functions; newAsync will create the Async/Event functions. Caveat: I haven't tried this but it was stated in this article. I'll leave confirming this as an exercise for the student. Included Code: I'm including the complete test project I created to verify the findings. The project was created with VS 2008 SP1. There is a solution file with 3 projects: Web Service, Asp.Net Application, and Windows Forms Application. To decide which program runs, you right-click a project and select "Set as Startup Project". I created and played with the Windows Forms application to see if it would reveal any secrets. I found that in the Windows Forms application, the generated proxy did NOT include the Begin/Callback functions. Those functions are only generated for Asp.Net pages, probably for the reasons discussed earlier. Maybe those Microsoft boys and girls know what they are doing. 
I hope someone finds this useful. Steve Wellens

    Read the article

  • Visual Studio confused when there are multiple system.web sections in your web.config

    - by Jeff Widmer
    I am trying to start debugging in Visual Studio for the website I am currently working on, but Visual Studio tells me that I have to enable debugging in the web.config to continue, even though I clearly have debugging enabled. At first I chose the option to modify the web.config file to enable debugging, but then I started receiving the following exception on my site: HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Config section 'system.web/compilation' already defined. Sections must only appear once per config file. See the help topic <location> for exceptions. So what is going on here? I already have debug=”true”, Visual Studio tells me I do not, and then when I give Visual Studio permission to fix the problem, I get a configuration error. Eventually I tracked it down to having two <system.web> sections: I had defined customErrors in one section higher in the web.config and had a second system.web section with compilation debug=”true” further down. This is valid in a web.config and my site was not complaining, but apparently Visual Studio does not know how to handle it: it sees the first system.web, does not see the debug=”true”, and thinks your site is not set up for debugging. To fix this so that Visual Studio would stop complaining, I removed the duplicate system.web declaration and moved the customErrors statement down.
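
    A sketch of what the merged section ends up looking like — the element names come from the post, while the customErrors mode is an illustrative placeholder:

        <!-- one <system.web>, so Visual Studio finds debug="true" on its only pass -->
        <system.web>
          <compilation debug="true" />
          <customErrors mode="RemoteOnly" />
        </system.web>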

    Read the article

  • Getting Query Parameters in Javascript

    - by PhubarBaz
    I find myself needing to get query parameters that are passed into a web app on the URL quite often. At first I wrote a function that creates an associative array (aka object) with all of the parameters as keys and returns it. But then I was looking at the revealing module pattern, a nice javascript design pattern designed to hide private functions, and came up with a way to do this without even calling a function. What I came up with was this nice little object that automatically initializes itself into the same associative array that the function call did previously. // Creates associative array (object) of query params var QueryParameters = (function() {     var result = {};     if (window.location.search)     {         // split up the query string and store in an associative array         var params = window.location.search.slice(1).split("&");         for (var i = 0; i < params.length; i++)         {             var tmp = params[i].split("=");             result[tmp[0]] = unescape(tmp[1]);         }     }     return result; }()); Now all you have to do to get the query parameters is just reference them from the QueryParameters object. There is no need to create a new object or call any function to initialize it. var debug = (QueryParameters.debug === "true"); or if (QueryParameters["debug"]) doSomeDebugging(); or loop through all of the parameters. for (var param in QueryParameters) var value = QueryParameters[param]; Hope you find this object useful.

    Read the article

  • Permission issues causes Unity Segmentation fault

    - by Dj Gilcrease
    I upgraded from 13.04 to 13.10 and I can boot to the unity-greeter and log in just fine, but after login I just get a black screen with a cursor. I have tried following http://help.ubuntu.com/community/BinaryDriverHowto/ATI and http://help.ubuntu.com/community/RadeonDriver as well as the manual install of the downloaded AMD drivers. All have the same effect. I have also read: Black screen after login with cursor; I get a black screen after logging in; ubuntu 13.04 black screen after login; Ubuntu 13.10 - Black screen after login session; Ubuntu 13.04 - Black screen with unresponsive cursor; Upgrade to Ubuntu 13.04 Problem - Boots into Blank Black Screen. None of these were any more help than the two support articles about ATI drivers. So I switched back to the default drivers and went a little further into debugging: when I do ctrl+alt+F1, log in, and try unity --debug > unity_start.log then ctrl+alt+F8, the screen stays black with a cursor, and when I switch back with ctrl+alt+F1 the contents of the log are http://pastebin.com/rdQG4Hb0 However, when I try sudo unity --debug > unity_start_root.log then ctrl+alt+F8, Unity starts, and the output of the log is http://pastebin.com/Yv4RD2j7 The fact that it starts as root tells me it is either a permissions issue on some required file or some setting specific to my user that is causing the SIGSEGV. To narrow this down I activated the guest account and tried to log in, and got the same black screen with only a mouse cursor, which tells me it is not a configuration issue but a permissions issue. So how do I narrow down which file has the wrong permissions? Also, is there anything further that may help debug this issue? OK, after a few more hours of googling I found that if I add myself to the video group I can log in and see the desktop, but there are lots of other permission-related issues, so I am thinking something went wonky with PolicyKit during the upgrade. Is there a way to reset PolicyKit settings for a user?
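
    Since the video group is the one thing the poster confirmed makes a difference, that is worth pinning down first. A sketch of the checks — the username is a placeholder, and /dev/dri is the usual location of the render devices rather than something taken from the post:

        # is the video group actually applied to the current session?
        id
        # the GPU nodes are typically owned by root:video
        ls -l /dev/dri
        # (re)add the user, then log out and back in to pick up the membership
        sudo usermod -aG video youruser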

    Read the article

< Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >