Search Results

Search found 5026 results on 202 pages for 'blocked threads'.


  • FairWarning Privacy Monitoring Solutions Rely on MySQL to Secure Patient Data

    - by Rebecca Hansen
    FairWarning® solutions have audited well over 120 billion events, each of which was processed and stored in a MySQL database. FairWarning is the world's leading supplier of privacy monitoring solutions for electronic health records, relied on by over 1,200 hospitals and 5,000 clinics to keep their patients' data safe. In January 2014, FairWarning was awarded the highest commendation in healthcare IT as the first-ever Category Leader for Patient Privacy Monitoring in the "2013 Best in KLAS: Software & Services" report[1]. FairWarning has used MySQL as their solutions’ database from their start in 2005 through worldwide expansion and market leadership. FairWarning recently migrated their solutions from MyISAM to InnoDB and updated from MySQL 5.5 to 5.6. Following are some of the benefits they’ve had as a result of those changes and reasons for their continued reliance on MySQL (from the FairWarning MySQL Case Study). Scalability to Handle Terabytes of Data FairWarning's customers have a lot of data: On average, FairWarning customers receive over 700,000 events to be processed daily. Over 25% of their customers receive over 30 million events per day, which equates to over 1 billion events and nearly one terabyte (TB) of new data each month. Databases range in size from a few hundred GBs to 10+ TBs for enterprise deployments (data are rolled off after 13 months). Low or Zero Admin = Few DBAs "MySQL has not required a lot of administration. After it's been tuned, configured, and optimized for size on initial setup, we have very low administrative costs. I can scale and add more customers without adding DBAs. This has had a big, positive impact on our business.” - Chris Arnold, FairWarning Vice President of Product Management and Engineering. Performance Schema As the size of FairWarning's customers has increased, so have their tables and data volumes. MySQL 5.6's new maintenance and management features have helped FairWarning keep up. In particular, MySQL 5.6 Performance Schema's low-level metrics have provided critical insight into how the system is performing and why. Support for Multi-CPU Threads MySQL 5.6's support for multiple concurrent CPU threads, together with FairWarning's custom data loader, allows multiple files to load into a single table simultaneously vs. one at a time. As a result, their data load time has improved five-fold. MySQL Enterprise Hot Backup Because hospitals and clinics never stop, FairWarning solutions can’t either. FairWarning changed from using mysqldump to MySQL Enterprise Hot Backup, which has reduced downtime, restore time, and storage requirements. For many of their larger customers, restore time has decreased by 80%. MySQL Enterprise Edition and Product Roadmap Provide Complete Solution "MySQL's product roadmap fully addresses our needs. We like the fact that MySQL Enterprise Edition has everything included; there's no need to purchase separate modules." - Chris Arnold Learn More>> FairWarning MySQL Case Study Why MySQL 5.6 is an Even Better Embedded Database for Your Products presentation Updating Your Products to MySQL 5.6, Best Practices for OEMs on-demand webinar (audio and/or slides + Q&A transcript) MyISAM to InnoDB – Why and How on-demand webinar (same formats) Top 10 Reasons to Use MySQL as an Embedded Database white paper [1] 2013 Best in KLAS: Software & Services report, January 2014. © 2014 KLAS Enterprises, LLC. All rights reserved.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service

    - by Elton Stoneman
    We're in the process of delivering an enabling project to expose on-premise WCF services securely to Internet consumers. The Azure Service Bus Relay is doing the clever stuff: we register our on-premise service with Azure, consumers call into our .servicebus.windows.net namespace, and their requests are relayed and serviced on-premise. In theory it's all wonderfully simple; by using the relay we get lots of protocol options, free HTTPS and load balancing, and by integrating with ACS we get plenty of security options. Part of our delivery is a suite of sample consumers for the service - .NET, jQuery, PHP - and this set of posts will cover setting up the service and the consumers. Part 1: Exposing the on-premise service In theory, this is ultra-straightforward. In practice, and on a dev laptop, it is - but in a corporate network with firewalls and proxies, it isn't, so we'll walk through some of the pitfalls. Note that I'm using the "old" Azure portal which will soon be out of date, but the new shiny portal should have the same steps available and be easier to use. We start with a simple WCF service which takes a string as input, reverses the string and returns it. The Part 1 version of the code is on GitHub here: IPASBR Part 1. Configuring Azure Service Bus Start by logging into the Azure portal and registering a Service Bus namespace which will be our endpoint in the cloud. Give it a globally unique name, set it up somewhere near you (if you’re in Europe, remember Europe (North) is Ireland, and Europe (West) is the Netherlands), and enable ACS integration by ticking "Access Control" as a service: Authenticating and authorizing to ACS When we try to register our on-premise service as a listener for the Service Bus endpoint, we need to supply credentials, which means only trusted service providers can act as listeners. We can use the default "owner" credentials, but that has admin permissions so a dedicated service account is better (Neil Mackenzie has a good post On Not Using owner with the Azure AppFabric Service Bus with lots of permission details). Click on "Access Control Service" for the namespace, navigate to Service Identities and add a new one. Give the new account a sensible name and description: Let ACS generate a symmetric key for you (this will be the shared secret we use in the on-premise service to authenticate as a listener), but be sure to set the expiration date to something usable. The portal defaults to expiring new identities after 1 year - but when your year is up *your identity will expire without warning* and everything will stop working. In production, you'll need governance to manage identity expiration and a process to make sure you renew identities and roll new keys regularly. The new service identity needs to be authorized to listen on the service bus endpoint. This is done through claim mapping in ACS - we'll set up a rule that says if the nameidentifier in the input claims has the value serviceProvider, in the output we'll have an action claim with the value Listen. In the ACS portal you'll see that there is already a Relying Party Application set up for ServiceBus, which has a Default rule group.
Edit the rule group and click Add to add this new rule: The values to use are: Issuer: Access Control Service Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier Input claim value: serviceProvider Output claim type: net.windows.servicebus.action Output claim value: Listen When your service namespace and identity are set up, open the Part 1 solution and put your own namespace, service identity name and secret key into the file AzureConnectionDetails.xml in Solution Items, e.g.: <azure namespace="sixeyed-ipasbr">    <!-- ACS credentials for the listening service (Part1):-->   <service identityName="serviceProvider"            symmetricKey="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>  </azure> Build the solution, and the T4 template will generate the Web.config for the service project with your Azure details in the transportClientEndpointBehavior:           <behavior name="SharedSecret">             <transportClientEndpointBehavior credentialType="SharedSecret">               <clientCredentials>                 <sharedSecret issuerName="serviceProvider"                               issuerSecret="nuR2tHhlrTCqf4YwjT2RA2BZ/+xa23euaRJNLh1a/V4="/>               </clientCredentials>             </transportClientEndpointBehavior>           </behavior> , and your service namespace in the Azure endpoint:         <!-- Azure Service Bus endpoints -->          <endpoint address="sb://sixeyed-ipasbr.servicebus.windows.net/net"                   binding="netTcpRelayBinding"                   contract="Sixeyed.Ipasbr.Services.IFormatService"                   behaviorConfiguration="SharedSecret">         </endpoint> The sample project is hosted in IIS, but it won't register with Azure until the service is activated. Typically you'd install AppFabric 1.1 for Windows Server and set the service to auto-start in IIS, but for dev just navigate to the local REST URL, which will activate the service and register it with Azure. Testing the service locally As well as an Azure endpoint, the service has a WebHttpBinding for local REST access:         <!-- local REST endpoint for internal use -->         <endpoint address="rest"                   binding="webHttpBinding"                   behaviorConfiguration="RESTBehavior"                   contract="Sixeyed.Ipasbr.Services.IFormatService" /> Build the service, then navigate to: http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 - and you should see the reversed string response: If your network allows it, you'll get the expected response as before, but in the background your service will also be listening in the cloud. Good stuff! Who needs network security? Onto the next post for consuming the service with the netTcpRelayBinding. Setting up network access to Azure But, if you get an error, it's because your network is secured and it's doing something to stop the relay working. The Service Bus relay bindings try to use direct TCP connections to Azure, so if ports 9350-9354 are available *outbound*, then the relay will run through them. If not, the binding steps down to standard HTTP, and issues a CONNECT across port 443 or 80 to set up a tunnel for the relay.
If your network security guys are doing their job, the first option will be blocked by the firewall, and the second option will be blocked by the proxy, so you'll get this error: System.ServiceModel.CommunicationException: Unable to reach sixeyed-ipasbr.servicebus.windows.net via TCP (9351, 9352) or HTTP (80, 443) - and that will probably be the start of lots of discussions. Network guys don't really like giving servers special permissions for the web proxy, and they really don't like opening ports, so they'll need to be convinced about this. The resolution in our case was to put up a dedicated box in a DMZ, tinker with the firewall and the proxy until we got a relay connection working, then run some traffic which the network guys monitored to do a security assessment afterwards. Along the way we hit a few more issues, diagnosed mainly with Fiddler and Wireshark: System.Net.ProtocolViolationException: Chunked encoding upload is not supported on the HTTP/1.0 protocol - this means the TCP ports are not available, so Azure tries to relay messaging traffic across HTTP. The service can access the endpoint, but the proxy is downgrading traffic to HTTP 1.0, which does not support tunneling, so Azure can’t make its connection. We were using the Squid proxy, version 2.6. The Squid project is incrementally adding HTTP 1.1 support, but there's no definitive list of what's supported in what version (here are some hints). System.ServiceModel.Security.SecurityNegotiationException: The X.509 certificate CN=servicebus.windows.net chain building failed. The certificate that was used has a trust chain that cannot be verified. Replace the certificate or change the certificateValidationMode. The revocation function was unable to check revocation because the revocation server was offline. - by this point we'd given up on the HTTP proxy and opened the TCP ports. We got this error when the relay binding does its authentication hop to ACS. The messaging traffic is TCP, but the control traffic still goes over HTTP, and as part of the ACS authentication the process checks with a revocation server to see if Microsoft’s ACS cert is still valid, so the proxy still needs some clearance. The service account (the IIS app pool identity) needs access to: www.public-trust.com mscrl.microsoft.com We still got this error periodically with different accounts running the app pool. We fixed that by ensuring the machine-wide proxy settings are set up, so every account uses the correct proxy: netsh winhttp set proxy proxy-server="http://proxy.x.y.z" - and you might need to run this to clear out your credential cache: certutil -urlcache * delete If your network guys end up grudgingly opening ports, they can restrict connections to the IP address range for your chosen Azure datacentre, which might make them happier - see Windows Azure Datacenter IP Ranges. After all that you've hopefully got an on-premise service listening in the cloud, which you can consume from pretty much any technology.
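
    For reference, a service shaped like the one described above (take a string, reverse it, return it) might look roughly like this. The contract and namespace names come from the config snippets in this post; the operation name Reverse and the implementation details are my own assumptions, not the author's actual code (which is in the linked GitHub repository).

    using System;
    using System.ServiceModel;

    namespace Sixeyed.Ipasbr.Services
    {
        // Contract name taken from the endpoint configuration above; the operation name is assumed.
        [ServiceContract]
        public interface IFormatService
        {
            [OperationContract]
            string Reverse(string input);
        }

        // Minimal on-premise implementation: reverse the string and return it.
        public class FormatService : IFormatService
        {
            public string Reverse(string input)
            {
                if (string.IsNullOrEmpty(input)) return input;
                char[] chars = input.ToCharArray();
                Array.Reverse(chars);
                return new string(chars);
            }
        }
    }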

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes.
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files. Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions. Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
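
    As a rough sketch of the blocked/parallelized upload described above (not the author's code, which is linked in the Related Resources), splitting a file and pushing the blocks with PutBlock()/PutBlockList() via a Parallel.For loop might look something like this. It assumes the v1.x StorageClient library mentioned in the post; the block-naming scheme, whole-file buffering, and missing retry/error handling are simplifications.

    using System;
    using System.IO;
    using System.Security.Cryptography;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.StorageClient; // v1.x storage client, as used in the post

    public static class BlockedUploader
    {
        // Upload a local file to a block blob in fixed-size blocks, in parallel.
        public static void Upload(CloudBlockBlob blob, string path, int blockSize = 1024 * 1024)
        {
            byte[] data = File.ReadAllBytes(path);   // simplification: whole file in memory
            int blockCount = (int)Math.Ceiling((double)data.Length / blockSize);
            var blockIds = new string[blockCount];

            Parallel.For(0, blockCount, i =>
            {
                int offset = i * blockSize;
                int length = Math.Min(blockSize, data.Length - offset);

                // Block IDs must be base64 strings of equal length.
                blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));

                using (var md5 = MD5.Create())
                using (var ms = new MemoryStream(data, offset, length))
                {
                    // Send each block with its MD5 so the service verifies what arrived.
                    string contentMd5 = Convert.ToBase64String(md5.ComputeHash(data, offset, length));
                    blob.PutBlock(blockIds[i], ms, contentMd5);
                }
            });

            blob.PutBlockList(blockIds);              // commit the uploaded blocks as one blob
        }
    }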

    Read the article

  • Enable wireless on Dell Inspiron 1300

    - by Simon
    As per subject, I've looked at various resources and attempted ndiswrapper solutions, found a one-click solution that lead to a 404 and this but none works. I've run all updates. Once I managed to lose my wired connection as well and had to reinstall. This is my first hour with Linux. iwconfig gives this before I do anything: lo no wireless extensions. wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:on eth0 no wireless extens Thanks for responding lspci returns 00:00.0 Host bridge: Intel Corporation Mobile 915GM/PM/GMS/910GML Express Processor to DRAM Controller (rev 03) Subsystem: Dell Device 01c9 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx- Latency: 0 Capabilities: <access denied> Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 03) (prog-if 00 [VGA controller]) Subsystem: Dell Device 01c9 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at dff00000 (32-bit, non-prefetchable) [size=512K] Region 1: I/O ports at eff8 [size=8] Region 2: Memory at c0000000 (32-bit, prefetchable) [size=256M] Region 3: Memory at dfec0000 (32-bit, non-prefetchable) [size=256K] Expansion ROM at <unassigned> [disabled] Capabilities: <access denied> Kernel driver in use: i915 Kernel modules: intelfb, i915 00:02.1 Display controller: Intel Corporation Mobile 915GM/GMS/910GML Express Graphics Controller (rev 03) Subsystem: Dell Device 01c9 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Region 0: Memory at dff80000 (32-bit, non-prefetchable) [size=512K] Capabilities: <access denied> 00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03) Subsystem: Dell Device 01c9 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Interrupt: pin A routed to IRQ 42 Region 0: Memory at dfebc000 (64-bit, non-prefetchable) [size=16K] Capabilities: <access denied> Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) PCI Express Port 1 (rev 03) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Bus: primary=00, secondary=0b, subordinate=0b, sec-latency=0 I/O behind bridge: 00002000-00002fff Memory behind bridge: 30000000-301fffff Prefetchable memory behind bridge: 0000000030200000-00000000303fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- 
PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) PCI Express Port 4 (rev 03) (prog-if 00 [Normal decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+ Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0, Cache Line Size: 64 bytes Bus: primary=00, secondary=0c, subordinate=0d, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: dfc00000-dfdfffff Prefetchable memory behind bridge: 00000000d0000000-00000000d01fffff Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #1 (rev 03) (prog-if 00 [UHCI]) Subsystem: Dell Device 01c9 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 4: I/O ports at bf80 [size=32] Kernel driver in use: uhci_hcd 00:1d.1 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #2 (rev 03) (prog-if 00 [UHCI]) Subsystem: Dell Device 01c9 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin B routed to IRQ 17 Region 4: I/O ports at bf60 [size=32] Kernel driver in use: uhci_hcd 00:1d.2 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #3 (rev 03) (prog-if 00 [UHCI]) Subsystem: Dell Device 01c9 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin C routed to IRQ 18 Region 4: I/O ports at bf40 [size=32] Kernel driver in use: uhci_hcd 00:1d.3 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB UHCI #4 (rev 03) (prog-if 00 [UHCI]) Subsystem: Dell Device 01c9 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin D routed to IRQ 19 Region 4: I/O ports at bf20 [size=32] Kernel driver in use: uhci_hcd 00:1d.7 USB controller: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) USB2 EHCI Controller (rev 03) (prog-if 20 [EHCI]) Subsystem: Dell Device 01c9 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: Memory at b0000000 (32-bit, non-prefetchable) [size=1K] Capabilities: <access denied> Kernel driver in use: ehci_hcd 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev d3) (prog-if 01 [Subtractive decode]) Control: I/O+ Mem+ BusMaster+ SpecCycle- 
MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=32 I/O behind bridge: 0000f000-00000fff Memory behind bridge: dfb00000-dfbfffff Prefetchable memory behind bridge: 00000000fff00000-00000000000fffff Secondary status: 66MHz- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort+ <SERR- <PERR- BridgeCtl: Parity- SERR+ NoISA- VGA- MAbort- >Reset- FastB2B- PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn- Capabilities: <access denied> 00:1f.0 ISA bridge: Intel Corporation 82801FBM (ICH6M) LPC Interface Bridge (rev 03) Subsystem: Dell Device 01c9 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Kernel modules: iTCO_wdt, intel-rng 00:1f.1 IDE interface: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) IDE Controller (rev 03) (prog-if 8a [Master SecP PriP]) Subsystem: Dell Device 01c9 Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 0 Interrupt: pin A routed to IRQ 16 Region 0: I/O ports at 01f0 [size=8] Region 1: I/O ports at 03f4 [size=1] Region 2: I/O ports at 0170 [size=8] Region 3: I/O ports at 0374 [size=1] Region 4: I/O ports at bfa0 [size=16] Kernel driver in use: ata_piix 02:00.0 Ethernet controller: Broadcom Corporation BCM4401-B0 100Base-TX (rev 02) Subsystem: Dell Device 01c9 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 64 Interrupt: pin A routed to IRQ 18 Region 0: Memory at dfbfc000 (32-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: b44 Kernel modules: b44 02:03.0 Network controller: Broadcom Corporation BCM4318 [AirForce One 54g] 802.11g Wireless LAN Controller (rev 02) Subsystem: Dell Wireless 1370 WLAN Mini-PCI Card Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 64 Interrupt: pin A routed to IRQ 17 Region 0: Memory at dfbfe000 (32-bit, non-prefetchable) [size=8K] Kernel driver in use: b43-pci-bridge Kernel modules: ssb and the rfkill shows 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no Just checking addtional drivers. Says no additional driver installed in this system

    Read the article

  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection is constantly disconnecting, then attempting to reconnect, reconnecting momentarily, then disconnecting etc. on times scales that range from seconds to minutes. In the meantime, needless to say I'm having significant packet loss. I'm running Ubuntu 14.04 64bit, updated and upgraded to today. Here is my card and driver: delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5 04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73) Subsystem: Intel Corporation Dual Band Wireless-AC 7260 Flags: bus master, fast devsel, latency 0, IRQ 47 Memory at f7d00000 (64-bit, non-prefetchable) [size=8K] Capabilities: Kernel driver in use: iwlwifi Here is my kernel: delta@sager:~$ uname -r 3.13.0-34-generic None of the other machines on my home network are having these issues. Windows Vista is networking without issue for goodness sake ;-) Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anytime there is a ---------------, that was where the MAC address was: [ 1881.739161] wlan1: authenticate with --------------- [ 1881.741561] wlan1: send auth to --------------- (try 1/3) [ 1881.743440] wlan1: authenticated [ 1881.746027] wlan1: associate with --------------- (try 1/3) [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4) [ 1881.754727] wlan1: associated [ 1881.754827] cfg80211: Calling CRDA for country: US [ 1881.761552] cfg80211: Regulatory domain changed to country: US [ 1881.761559] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1881.761564] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm) [ 1881.761568] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm) [ 1881.761571] cfg80211: (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761574] cfg80211: (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761577] cfg80211: (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1881.761580] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm) [ 1881.761584] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm) [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain [ 1882.396254] cfg80211: World regulatory domain updated: [ 1882.396260] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1882.396265] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396268] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396271] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 1882.396274] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1882.396277] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.148252] wlan1: authenticate with --------------- [ 1886.150005] wlan1: send auth to --------------- (try 1/3) [ 1886.151807] wlan1: authenticated [ 1886.154847] wlan1: associate with --------------- (try 1/3) [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4) [ 1886.163464] wlan1: associated [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by --------------- [ 1886.163588] cfg80211: Calling CRDA for country: US [ 1886.170500] cfg80211: Regulatory domain changed to country: US [ 1886.170508] cfg80211: 
(start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1886.170513] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm) [ 1886.170517] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm) [ 1886.170520] cfg80211: (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170523] cfg80211: (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170526] cfg80211: (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1886.170529] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm) [ 1886.170533] cfg80211: (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm) [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain [ 1887.203655] cfg80211: World regulatory domain updated: [ 1887.203659] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp) [ 1887.203662] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203664] cfg80211: (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203666] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm) [ 1887.203668] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) [ 1887.203670] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm) I've poked around on AskUbuntu, and have not found any adequate solutions; have also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, hardware issue, etc.? Thanks in advance. EDIT: chili, here's the output of iwconfig: delta@sager:~$ iwconfig wlan1 IEEE 802.11abg ESSID:"LANbeforetime" Mode:Managed Frequency:2.412 GHz Access Point: ----------- Bit Rate=48 Mb/s Tx-Power=16 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off Link Quality=44/70 Signal level=-66 dBm Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:80 Missed beacon:0 eth0 no wireless extensions. lo no wireless extensions.

    Read the article

  • ConcurrentDictionary<TKey,TValue> used with Lazy<T>

    - by Reed
    In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested mixing ConcurrentDictionary<T,U> with Lazy<T>.  This provides a fantastic model for creating a thread-safe dictionary of values where the construction of the value type is expensive.  This is an incredibly useful pattern for many operations, such as value caches. The ConcurrentDictionary<TKey, TValue> class was added in .NET 4, and provides a thread-safe, lock-free collection of key value pairs.  While this is a fantastic replacement for Dictionary<TKey, TValue>, it has a potential flaw when used with values where construction of the value class is expensive. The typical way this is used is to call a method such as GetOrAdd to fetch or add a value to the dictionary.  It handles all of the thread safety for you, but as a result, if two threads call this simultaneously, two instances of TValue can easily be constructed. If TValue is very expensive to construct, or worse, has side effects if constructed too often, this is less than desirable.  While you can easily work around this with locking, Stephen Toub provided a very clever alternative – using Lazy<TValue> as the value in the dictionary instead. This looks like the following.  Instead of calling: MyValue value = dictionary.GetOrAdd( key, k => new MyValue(k)); We would instead use a ConcurrentDictionary<TKey, Lazy<TValue>>, and write: MyValue value = dictionary.GetOrAdd( key, k => new Lazy<MyValue>( () => new MyValue(k))) .Value; This simple change dramatically changes how the operation works.  Now, if two threads call this simultaneously, instead of constructing two MyValue instances, we construct two Lazy<MyValue> instances. However, the Lazy<T> class is very cheap to construct.  Unlike “MyValue”, we can safely afford to construct this twice and “throw away” one of the instances. We then call Lazy<T>.Value at the end to fetch our “MyValue” instance.  At this point, GetOrAdd will always return the same instance of Lazy<MyValue>.  Since Lazy<T> doesn’t construct the MyValue instance until requested, the actual MyValue instance returned is only constructed once.
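
    To make the pattern concrete, here is a small self-contained sketch of the cache described above; MyValue, the key type, and the console output are placeholders rather than anything from the original post.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class MyValue
    {
        public MyValue(int key)
        {
            // Expensive (or side-effecting) construction we want to happen only once per key.
            Console.WriteLine("Constructing MyValue for key {0}", key);
        }
    }

    class Program
    {
        static void Main()
        {
            var dictionary = new ConcurrentDictionary<int, Lazy<MyValue>>();

            // Many threads asking for the same key: the Lazy<MyValue> wrapper may be built
            // more than once, but GetOrAdd hands every caller the single stored instance,
            // so the expensive MyValue constructor runs only once.
            Parallel.For(0, 100, i =>
            {
                MyValue value = dictionary.GetOrAdd(
                    42,
                    k => new Lazy<MyValue>(() => new MyValue(k))).Value;
            });
        }
    }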

    Read the article

  • Profiling Startup Of VS2012 – dotTrace Profiler

    - by Alois Kraus
    JetBrains, which is famous for the ReSharper tool, also has a profiler in its portfolio. I downloaded dotTrace 5.2 Professional (569€+VAT) to check how far I can profile the startup of VS2012. The most interesting startup option is “.NET Process”. With that you can profile the next started .NET process, which is very useful if you want to profile an application which is not started by you. I did select Tracing and Wall time to get similar options across all profilers. For some reason the attach option did not work with .NET 4.5 on my home machine, but I am sure that it did work with .NET 4.0 some time ago. Since we are profiling devenv.exe we can also select “Standalone Application” and start it from the profiler. The startup time of VS does increase by about a factor of 3, but that is ok. You get mainly three windows to work with. The first one shows the threads, where you can drill down thread by thread to see where most time is spent. The next window is the call tree, which merges all threads together in a similar view. The last and most useful view in my opinion is the Plain List window, which is nearly the same as the Method Grid in Ants Profiler. But this time, when I enable the Show system functions checkbox, we get not 150 but 19,407 methods to choose from! I really tried with Ants Profiler to find out how VS works, but look how much we were missing! When I double-click on a method I get, in the lower pane, the called methods and their respective timings. This is something really useful and I can nicely drill down to the most important stuff. The measured time seems to be Wall Clock time, which is a good thing to see where my time is really spent. You can also use Sampling as the profiling method, but this gives you much less information. Except for getting a first idea of where to look, this profiling mode is not very useful for understanding how your system interacts. The options have a good list of presets that hide many methods by default, or gray them out, so you can concentrate on your code. Nothing is filtered out if you enable Show system functions; by default, methods from these assemblies are hidden, or grayed out if the checkbox is checked. All in all, JetBrains has made a nice profiler which shows great detail and has nice drill-down capabilities. The only thing is that I do not trust its measured timings. I fell into the trap several times with this one, optimizing places which were already fast because the profiler showed high times in these methods. After measuring with Tracing I was certain that the measured times were greatly exaggerated. Especially when IO is involved, it seems to have a hard time subtracting its own overhead. What I missed most was the possibility to profile not only the next started process, but also to be able to select a process by name and perhaps a count, to profile the next n processes of that name. Next: YourKit

    Read the article

  • Best way to process a queue in C# (PDF treatment)

    - by Bartdude
    First of all, let me explain what I would like to do: I already have a long-running webapp developed in ASP.NET (C#) 2.0. In this app, users can upload standard PDF files (text+pics). The app is running in production on Windows Server 2003 and has a dedicated database server (SQL Server 2008) also running Windows Server 2003. I myself am a quite experienced web developer, but have never actually programmed anything non-web (or at least nothing serious). I plan on adding functionality to the webapp for which I would need a JPG snapshot of each page of the PDF. Creating these "thumbnails" isn't the hard part as such; I already do it inside my webapp using Ghostscript. I've only done it on 1-page documents for now though, and the new functionality will need to process bigger documents. In order for this process to be transparent to the admins as well as the final users, I would like to implement some kind of queue to delay the processing of the thumbnails. Again, no problem creating the queue: it will consist of records in a table, with enough info to find the PDF document again. Then I will need to process this queue, and that's where my questions start. Obviously the best solution to process it isn't an ASP script or the like, so I will have to step outside my familiar environment. No problem, but I have no idea which direction to go. Therefore, a few questions: What should I develop? I presumably need something that is on standby on the server, runs when needed, then returns to an idle state until further notice. Should I be looking into a Windows service? Is there another more appropriate type of project? Depending on the first answer, what will be the approach? Should I somehow have SQL Server "tell" the program/service/... to process the queue, or should I have that program/service/... periodically check the state of the queue and process new items? In both cases, which functionality can I use? We're not talking about hundreds of PDFs a day (max 50 maybe), so I can totally afford to process the queue one item at a time. Can you confirm I don't have to look much further into threads and such? (I found a lot of answers talking about threads in queue processing, but it looks quite overkill for my needs.) Maybe linked to the previous question: what about concurrent calls to the program, whatever it is? Let's suppose it is currently running, and a new record comes into the queue; what should the behaviour be? I don't need very detailed answers and would already be happy with answers like "You can do the processing with a service, and yes it's possible to have SQL Server on machine A trigger a service start on machine B" or "You have to develop xxx and then use the scheduler to run it every xxx minutes". I don't mind reading articles and so on, but I can hardly afford to spend too much time learning stuff only to realize I went the wrong way for this project, so basically I'm trying to narrow down the scope of matters I need to investigate. Thanks for reading; I hope I'll find some helping hands on here :-)
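
    For what it's worth, a minimal sketch of the polling approach described in the question (a Windows service with a timer that periodically checks the queue table and processes one item at a time) might look like the following. The connection string, table and column names, and the ProcessPdf call are placeholders, not anything prescribed by the question.

    using System;
    using System.Data.SqlClient;
    using System.ServiceProcess;
    using System.Timers;

    public class PdfQueueService : ServiceBase
    {
        private readonly Timer timer = new Timer(60000);   // poll the queue once a minute
        private const string ConnectionString = "Data Source=dbserver;Initial Catalog=WebApp;Integrated Security=True"; // placeholder

        protected override void OnStart(string[] args)
        {
            timer.Elapsed += (s, e) => ProcessQueue();
            timer.Start();
        }

        protected override void OnStop()
        {
            timer.Stop();
        }

        private void ProcessQueue()
        {
            using (var conn = new SqlConnection(ConnectionString))
            {
                conn.Open();

                // Fetch the oldest unprocessed item; hypothetical table/column names.
                int id;
                string path;
                using (var select = new SqlCommand(
                    "SELECT TOP 1 Id, PdfPath FROM PdfQueue WHERE Processed = 0 ORDER BY Id", conn))
                using (var reader = select.ExecuteReader())
                {
                    if (!reader.Read()) return;             // nothing to do this cycle
                    id = reader.GetInt32(0);
                    path = reader.GetString(1);
                }

                ProcessPdf(path);                            // placeholder for the Ghostscript thumbnail work

                using (var update = new SqlCommand("UPDATE PdfQueue SET Processed = 1 WHERE Id = @id", conn))
                {
                    update.Parameters.AddWithValue("@id", id);
                    update.ExecuteNonQuery();
                }
            }
        }

        private void ProcessPdf(string path) { /* generate one JPG per page here */ }
    }

    At the volumes mentioned (50 PDFs a day at most) this single-item, single-threaded loop is plenty; the main refinements you would eventually want are a try/catch around ProcessPdf so one bad file doesn't stall the queue, and a guard against the timer firing again while a long document is still being processed.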

    Read the article

  • Is this a good implementation of DefaultHttpClient and ThreadSafeClientConnManager in Android?

    - by johnrock
    In my Android app I am sharing one httpclient for all activities/threads. All requests are made by callling getHttpClient().execute(httpget) or getHttpClient().execute(httppost). Is this implementation complete/correct and safe for multiple threads? Is there anything else missing i.e. Do I have to worry about releasing connections at all? private static HttpClient httpclient ; public static HttpClient getHttpClient() { if(httpclient == null){ return getHttpClientNew(); } else{ return httpclient; } } public static synchronized HttpClient getHttpClientNew() { HttpParams params = new BasicHttpParams(); ConnManagerParams.setMaxTotalConnections(params, 100); HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1); HttpProtocolParams.setContentCharset(params, "UTF_8"); HttpProtocolParams.setUseExpectContinue(params, false); HttpConnectionParams.setConnectionTimeout(params, 10000); HttpConnectionParams.setSoTimeout(params, 10000); SchemeRegistry schemeRegistry = new SchemeRegistry(); schemeRegistry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80)); ClientConnectionManager cm = new ThreadSafeClientConnManager(params, schemeRegistry); httpclient = new DefaultHttpClient(cm, params); return httpclient; } This is an example of how the httpclient is used: private void update() { HttpGet httpget = new HttpGet(URL); httpget.setHeader(USER_AGENT, userAgent); httpget.setHeader(CONTENT_TYPE, MGUtils.APP_XML); HttpResponse response; try { response = getHttpClient().execute(httpget); HttpEntity entity = response.getEntity(); if (entity != null) { // parse stuff } } catch (Exception e) { } }

    Read the article

  • Multithreading recommendation based on program description

    - by user260197
    I would like to describe some specifics of my program and get feedback on what the best multithreading model to use would be most applicable. I've spent a lot of time now reading on ThreadPool, Threads, Producer/Consumer, etc. and have yet to come to solid conclusions. I have a list of files (all the same format) but with different contents. I have to perform work on each file. The work consists of reading the file, some processing that takes about 1-2 minutes of straight number crunching, and then writing large output files at the end. I would like the UI interface to still be responsive after I initiate the work on the specified files. Some questions: What model/mechanisms should I use? Producer/Consumer, WorkPool, etc. Should I use a BackgroundWorker in the UI for responsiveness or can I launch the threading from within the Form as long as I leave the UI thread alone to continue responding to user input? How could I take results or status of each individual work on each file and report it to the UI in a thread safe way to give user feedback as the work progresses (there can be close to 1000 files to process) Update: Great feedback so far, very helpful. I'm adding some more details that are asked below: Output is to multiple independent files. One set of output files per "work item" that then themselves gets read and processed by another process before the "work item" is complete The work items/threads do not share any resources. The work items are processed in part using a unmanaged static library that makes use of boost libraries.
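
    One concrete shape for the "queue the files, keep the UI alive" model the question describes, assuming Windows Forms: hand each file to the ThreadPool and marshal the per-file status back with BeginInvoke. The control name statusListBox and the ProcessFile body are placeholders.

    using System;
    using System.Collections.Generic;
    using System.Threading;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        private readonly ListBox statusListBox = new ListBox { Dock = DockStyle.Fill };

        public MainForm()
        {
            Controls.Add(statusListBox);
        }

        // Kick off the work without blocking the UI thread; one work item per file.
        private void StartWork(IEnumerable<string> files)
        {
            foreach (string file in files)
            {
                ThreadPool.QueueUserWorkItem(state =>
                {
                    string f = (string)state;
                    string result = ProcessFile(f);     // the 1-2 minutes of number crunching (placeholder)

                    // Marshal the status update back onto the UI thread.
                    BeginInvoke((Action)(() => ReportStatus(f, result)));
                }, file);
            }
        }

        private string ProcessFile(string file)
        {
            // read the file, crunch the numbers, write the output files
            return "done";
        }

        private void ReportStatus(string file, string result)
        {
            // Runs on the UI thread, so touching controls here is safe.
            statusListBox.Items.Add(file + ": " + result);
        }
    }

    With roughly 1000 CPU-bound items you would probably also cap how many files are processed at once (a semaphore, or partitioning the list), rather than queueing everything in one go.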

    Read the article

  • Segmentation fault on MPI, runs properly on OpenMP

    - by Bellman
    Hi, I am trying to run a program on a computer cluster. The structure of the program is the following: PROGRAM something ... CALL subroutine1(...) ... END PROGRAM SUBROUTINE subroutine1(...) ... DO i=1,n CALL subroutine2(...) ENDDO ... END SUBROUTINE SUBROUTINE subroutine2(...) ... CALL subroutine3(...) CALL subroutine4(...) ... END SUBROUTINE The idea is to parallelize the loop that calls subroutine2. The main program basically only makes the call to subroutine1, and only its arguments are declared. I use two alternatives. On the one hand, I write OpenMP clauses around the loop. On the other hand, I add an IF conditional branch around the call and I use MPI to share the results. In the OpenMP case, I add CALL KMP_SET_STACKSIZE(402653184) at the beginning of the main program and I can run it with 8 threads on an 8 core machine. When I run it (on the same 8 core machine) with MPI (using either 8 or 1 processors) it crashes with a segmentation fault (signal 11) error just when it makes the call to subroutine3. If I comment out subroutine4, then it doesn't crash (notice that it crashed just when calling subroutine3, and it works when subroutine4 is commented out). I compile with mpif90 using MPICH2 libraries and the following flags: -O3 -fpscomp logicals -openmp -threads -m64 -xS. The machine has EM64T architecture and I use a Debian Linux distribution. I set ulimit -s hard before running the program. Any ideas on what is going on? Does it have something to do with stack size? Thanks in advance

    Read the article

  • Visual Studio 2008 Web Project error: Unable to start program http://localhost:port

    - by JookyDFW
    I am re-hashing this question because I have looked at over 50 threads in different forums and have not been able to get a resolution to my problem. Here are the specs: Windows XP SP3, Visual Studio 2008 SP1, .NET 3.5, ASP.NET MVC 2 project, IE 7 (was IE 8) Up until a few days ago I was not having any issues. It is now happening on any solution that I try to debug. I start a debug session (F5), the solution rebuilds, a VS development web server starts and then I get this error: Unable to start program http://localhost:2012/ If I open a web browser and enter the URL, the application loads up. I had upgraded to IE 8 a few weeks ago and read that there may be some issues, so it has been uninstalled and I am currently on IE 7. Also, while IE 8 was installed I had switched my default browser to Firefox, but my current default browser is now IE 7. I have reviewed the threads on this site and others and have not been able to fix the issue. Any help would be appreciated.

    Read the article

  • C# Thread Pool Cross-Thread Communication

    - by Goober
    The Scenario: I have a Windows Forms application containing a MAINFORM with a listbox on it. The MAINFORM also has a THREAD POOL that creates new threads and fires them off to do lots of different bits of processing. Whilst each of these different worker threads is doing its job, I want to report this progress back to my MAINFORM; however, I can't, because it requires cross-thread communication. Progress: So far all of the tutorials etc. that I have seen relating to this topic involve custom(ish) threading implementations, whereas I literally have a fairly basic(ish) standard THREAD POOL implementation. Since I don't really want to modify any of my code (since the application runs like a beast with no qualms) - I'm after some advice as to how I can go about doing this cross-thread communication. ALTERNATIVELY - How to implement a different "LOGTOSCREEN" method altogether (obviously still bearing in mind the cross-thread communication thing). WARNING: I use this website at work, where we are locked down to IE6 only, and the javascript thus fails, meaning I cannot click accept on any answers during work, and thus my acceptance rate is low. I can't do anything about it I'm afraid, sorry. EDIT: I DO NOT HAVE INSTALL RIGHTS ON MY COMPUTER AT WORK. I do have Firefox but the proxy at work fails when using this site on Firefox. FURTHER EDIT: I DO NOT WANT TO CHANGE MY THREADING IMPLEMENTATION. AT ALL! - Except to enable cross-thread communication... why would a BackgroundWorker help here!?
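
    A minimal sketch of the "LOGTOSCREEN" route that leaves the thread pool code untouched: give MAINFORM a logging method that checks InvokeRequired and marshals itself onto the UI thread, and have the worker threads call it directly. The control name listBox1 is an assumption for the form's existing listbox.

    using System;
    using System.Windows.Forms;

    public partial class MainForm : Form
    {
        // Worker threads can call this directly from anywhere.
        public void LogToScreen(string message)
        {
            if (InvokeRequired)
            {
                // We're on a worker thread: re-invoke this same method on the UI thread.
                BeginInvoke((Action<string>)LogToScreen, message);
                return;
            }

            // Now on the UI thread, so it is safe to touch the control.
            listBox1.Items.Add(string.Format("{0:HH:mm:ss} {1}", DateTime.Now, message));
        }
    }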

    Read the article

  • solve a classic map-reduce problem with opencl?

    - by liuliu
    I am trying to parallelize a classic map-reduce problem (which parallelizes well with MPI) with OpenCL, namely, the AMD implementation. But the result bothers me. Let me briefly describe the problem first. There are two types of data that flow into the system: the feature set (30 parameters for each) and the sample set (9000+ dimensions for each). It is a classic map-reduce problem in the sense that I need to calculate the score of every feature on every sample (Map), and then sum up the overall score for every feature (Reduce). There are around 10k features and 30k samples. I tried different ways to solve the problem. First, I tried to decompose the problem by features. The problem is that the score calculation consists of random memory access (picking some of the 9000+ dimensions and doing addition/subtraction calculations). Since I cannot coalesce the memory accesses, it is costly. Then, I tried to decompose the problem by samples. The problem is that to sum up the overall score, all threads are competing for a few score variables. They keep overwriting the scores, which turn out to be incorrect. (I cannot store individual scores first and sum them up later because that would require 10k * 30k * 4 bytes.) The first method I tried gives me the same performance as an i7 860 CPU with 8 threads. However, I don't think the problem is unsolvable: it is remarkably similar to the ray tracing problem (for which you carry out calculations of millions of rays against millions of triangles). Any ideas?

    Read the article

  • Cannot use READPAST in snapshot isolation mode

    - by Marcus
    I have a process which is called from multiple threads and which does the following: (1) start a transaction; (2) select a unit of work from the work table by finding the next row where IsProcessed=0, with hints (UPDLOCK, HOLDLOCK, READPAST); (3) process the unit of work (C# and SQL stored procedures); (4) commit the transaction. The idea of this is that a thread dips into the pool for the "next" piece of work and processes it, and the locks are there to ensure that a single piece of work is not processed twice (the order doesn't matter). All this has been working fine for months. Until today, that is, when I happened to realise that despite enabling snapshot isolation and making it the default at the database level, the actual transaction creation code was manually setting an isolation level of "ReadCommitted". I duly changed that to "Snapshot", and of course immediately received the "You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ" error message. Oops! The main reason for locking the row was to "mark the row" in such a way that the "mark" would be removed when the transaction that applied the mark was committed, and the lock seemed to be the best way to do this, since this table isn't otherwise read except by these threads. If I were to use the IsProcessed flag as the lock, then presumably I would need to do the update first, and then select the row I just updated, but I would need to employ the NOLOCK hint to know whether any other thread had set the flag on a row. It all sounds a bit messy. The easiest option would be to abandon snapshot isolation mode altogether, but the design of step #3 requires it. Any bright ideas on the best way to resolve this problem? Thanks Marcus
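
    One pattern that sidesteps the READPAST-versus-snapshot conflict is to claim the row atomically in its own short READ COMMITTED transaction, using the IsProcessed flag itself as the mark (a single UPDATE ... OUTPUT), and keep the snapshot transaction for the long-running processing in step 3. A rough sketch, with the table and column names taken from the description and an assumed Id column:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    public static class WorkQueue
    {
        // Atomically claim the next unprocessed row and return its Id, or null if nothing is pending.
        public static int? ClaimNext(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction(IsolationLevel.ReadCommitted))
                using (var cmd = new SqlCommand(@"
                    UPDATE TOP (1) WorkTable WITH (READPAST, ROWLOCK)
                    SET IsProcessed = 1
                    OUTPUT inserted.Id
                    WHERE IsProcessed = 0;", conn, tx))
                {
                    object id = cmd.ExecuteScalar();
                    tx.Commit();
                    return (id == null || id == DBNull.Value) ? (int?)null : (int)id;
                }
            }
        }
    }

    The trade-off is that the mark no longer disappears automatically if the processing transaction rolls back, so you would also want something like a ClaimedAt timestamp (again, an assumption) to detect and reclaim abandoned work.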

    Read the article

  • BASH statements execute alone but return "no such file" in for loop.

    - by reve_etrange
    Another one I can't find an answer for, and it feels like I've gone mad. I have a BASH script using a for loop to run a complex command (many protein sequence alignments) on a lot of files (~5000). The loop produces statements that will execute when given alone (i.e. copy-pasted from the error message to the command prompt), but which return "no such file or directory" inside the loop. Script below; there are actually several more arguments, but this includes some representative ones and the file arguments.

        #!/bin/bash
        # Pass directory with targets as FASTA sequences as argument.

        # Arguments to psiblast
        # Common
        db=local/db/nr/nr
        outfile="/mnt/scratch/psi-blast"
        e=0.001
        threads=8
        itnum=5
        pssm="/mnt/scratch/psi-blast/pssm."
        pssm_txt="/mnt/scratch/psi-blast/pssm."
        pseudo=0
        pwa_inclusion=0.002

        for i in ${1}/*
        do
            filename=$(basename $i)
            "local/ncbi-blast-2.2.23+/bin/psiblast\
            -query ${i}\
            -db $db\
            -out ${outfile}/${filename}.out\
            -evalue $e\
            -num_threads $threads\
            -num_iterations $itnum\
            -out_pssm ${pssm}$filename\
            -out_ascii_pssm ${pssm_txt}${filename}.txt\
            -pseudocount $pseudo\
            -inclusion_ethresh $pwa_inclusion"
        done

    Running this script gives "<scriptname> line <last line before 'done'>: <attempted command>: No such file or directory". If I then paste the attempted command onto the prompt, it will run. Each of these commands takes a couple of minutes to run.
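
    For what it's worth, the double quotes wrapped around the whole psiblast invocation make bash treat the entire string (command plus arguments) as a single command name, which produces exactly this "No such file or directory" even though the pasted, unquoted command runs fine. A sketch of the loop body without the outer quotes, assuming the same variables and paths as the script above:

        for i in "${1}"/*
        do
            filename=$(basename "$i")
            # Run the binary directly; quote individual expansions, not the whole command line.
            local/ncbi-blast-2.2.23+/bin/psiblast \
                -query "$i" \
                -db "$db" \
                -out "${outfile}/${filename}.out" \
                -evalue "$e" \
                -num_threads "$threads" \
                -num_iterations "$itnum" \
                -out_pssm "${pssm}${filename}" \
                -out_ascii_pssm "${pssm_txt}${filename}.txt" \
                -pseudocount "$pseudo" \
                -inclusion_ethresh "$pwa_inclusion"
        done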

    Read the article

  • WPF Background Thread Invocation

    - by jeffn825
    Maybe I'm misremembering how WinForms works, or I'm overcomplicating the hell out of this, but here's my problem. I have a WPF client application that talks to a server over WCF. The current user may "log out" of the WPF client, which closes all open screens, leaves only the navigation pane, and minimizes the program window. When the user re-maximizes the program window, they are prompted to log in. Simple. But sometimes things happen on background threads - like every 5 minutes the client tries to make a WCF call that refreshes some cached data. And what if the user is logged out when this 5-minute timer triggers? Well, then the user should be prompted to log back in... and this must of course happen on the UI thread.

        private static ISecurityContext securityContext;

        public static ISecurityContext SecurityContext
        {
            get
            {
                if (securityContext == null)
                {
                    // Login method shows a window and prompts the user to log in
                    Application.Current.Dispatcher.Invoke((Action)Login);
                }
                return securityContext;
            }
        }

    So far so good, right? But what happens when multiple threads hit this spot of code? My first intuition was that since I'm synchronizing across the Application.Current.Dispatcher, I should be fine, and whichever thread hit first would be responsible for showing the login form and getting the user logged in... Not the case:

    - Thread 1 will hit the code and call ShowDialog on the login form.
    - Thread 2 will also hit the code and will call Login as soon as Thread 1 has called ShowDialog, since calling ShowDialog unblocks Thread 1 (I believe because of the way the WPF message pump works).

    All I want is a synchronized way of getting the user logged back into the application... What am I missing here? Thanks in advance.
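
    Not a definitive answer, but one sketch of serializing the login across background threads is to take a dedicated lock around the Dispatcher call and re-check the context after acquiring it, so the second thread waits and then finds the login already done. This assumes the getter is only ever hit from non-UI threads (a UI-thread caller blocking on the lock while another thread tries to Invoke onto the dispatcher would deadlock); the loginGate field is made up for illustration:

        private static readonly object loginGate = new object();
        private static ISecurityContext securityContext;

        public static ISecurityContext SecurityContext
        {
            get
            {
                if (securityContext == null)
                {
                    lock (loginGate)                     // only one background thread runs the login flow
                    {
                        if (securityContext == null)     // another thread may have logged in while we waited
                        {
                            Application.Current.Dispatcher.Invoke((Action)Login);
                        }
                    }
                }
                return securityContext;
            }
        }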

    Read the article

  • Calculate time of method execution and send to WCF service async

    - by Tim
    I need to implement time calculation for repository methods in my ASP.NET MVC project classes. The problem is that I need to send the timing data to a WCF service, which is time consuming. I am thinking about threads, which could help call the WCF service asynchronously, but I have very little experience with them. Do I need to create a new thread each time, or can I create a global thread? If so, then how? I have something like this. The StopWatch class:

        public class StopWatch
        {
            private DateTime _startTime;
            private DateTime _endTime;

            public void Start()
            {
                _startTime = DateTime.Now;
            }

            protected void StopTimerAndWriteStatistics()
            {
                _endTime = DateTime.Now;
                TimeSpan timeResult = _endTime - _startTime;

                //WCF proxy object
                var reporting = AppServerUtility.GetProxy<IReporting>();

                //Send data to server
                reporting.WriteStatistics(_startTime, _endTime, timeResult, "some information");
            }

            public void Stop()
            {
                //Here is the thread I have a question about
                var thread = new Thread(StopTimerAndWriteStatistics);
                thread.Start();
            }
        }

    Use of the StopWatch class in a repository:

        public class SomeRepository
        {
            public List<ObjectInfo> List()
            {
                StopWatch sw = new StopWatch();
                sw.Start();

                //performing long time operation

                sw.Stop();
            }
        }

    What am I doing wrong with threads?
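
    One commonly suggested alternative, rather than spinning up a new Thread per measurement, is to measure with System.Diagnostics.Stopwatch and hand the slow WCF call to the thread pool, which reuses a small set of worker threads. A rough sketch; AppServerUtility and IReporting are taken from the question, everything else (class and member names) is illustrative:

        using System;
        using System.Diagnostics;
        using System.Threading;

        public class MethodTimer
        {
            private readonly Stopwatch stopwatch = Stopwatch.StartNew();
            private readonly DateTime startTime = DateTime.Now;

            public void Stop()
            {
                stopwatch.Stop();
                DateTime endTime = DateTime.Now;
                TimeSpan elapsed = stopwatch.Elapsed;

                // Queue the slow reporting call on a pool thread instead of creating a new Thread.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    var reporting = AppServerUtility.GetProxy<IReporting>();
                    reporting.WriteStatistics(startTime, endTime, elapsed, "some information");
                });
            }
        }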

    Read the article

  • Android: Determine when an app is being finalized vs destroyed for screen orientation change

    - by Matt
    Hi all, I am relatively new to the Android world and am having some difficulty understanding how the whole screen orientation cycle works. I understand that when the orientation changes from portrait to landscape or vice versa, the activity is destroyed and then re-created; thus all the code in the onCreate function will run again. So here's my situation: I have an app that I am working on where it logs into a website, retrieves data, and displays it to the user. While this is all done in background threads, the code that starts these threads is in the onCreate function. Now, the problem is that whenever the user changes the screen orientation, the app will log in, retrieve the data, and display it to the user all over again. What I would like to do is set a boolean that tells the app whether it is logged in or not, so it knows whether it must log in when onCreate is called. As long as the app is in memory, the HttpClient will exist and contain the cookies from logging the user in, but when the app is killed by the system those will go away. So I would assume that I need to do something like setting the logged-in boolean to false when the app is killed, but since onDestroy is also called when the screen is rotated, how is this possible? I also looked into the finalize function and isFinishing(), but those don't seem to be working. Shorter version: how can I distinguish between an app being killed from memory and an activity being destroyed for a screen rotation, and run different code for each event? Any help or a point in the right direction is greatly appreciated. Thank you!
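
    One pattern worth noting: the saved-instance-state Bundle survives a rotation, and Activity.isFinishing() is false during a configuration change, so the two can be combined to tell the cases apart. A rough sketch inside the Activity; startLoginAndFetch and the loggedIn field are hypothetical stand-ins for the question's login/refresh code:

        private boolean loggedIn;   // session flag for this sketch

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            if (savedInstanceState != null) {
                loggedIn = savedInstanceState.getBoolean("loggedIn", false);
            }
            if (!loggedIn) {
                startLoginAndFetch();   // hypothetical helper that kicks off the background threads
            }
        }

        @Override
        protected void onSaveInstanceState(Bundle outState) {
            super.onSaveInstanceState(outState);
            outState.putBoolean("loggedIn", loggedIn);   // survives the rotation
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            if (isFinishing()) {
                // Real teardown (back pressed / finish()), not a rotation.
                loggedIn = false;
            }
        }

    One caveat: the Bundle can also be restored after the process itself was killed, in which case the HttpClient cookies are gone, so a real implementation would still want to validate the session on resume rather than trusting the flag alone.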

    Read the article

  • Production settings file for log4j?

    - by James
    Here is my current log4j settings file. Are these settings ideal for production use, or is there something I should remove, tweak, or change? I ask because I was getting all my threads hanging due to log4j blocking. I checked my open file descriptors; I was only using 113.

        # ***** Set root logger level to WARN and its two appenders to stdout and R.
        log4j.rootLogger=warn, stdout, R

        # ***** stdout is set to be a ConsoleAppender.
        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        # ***** stdout uses PatternLayout.
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        # ***** Pattern to output the caller's file name and line number.
        log4j.appender.stdout.layout.ConversionPattern=%5p [%t] (%F:%L) - %m%n

        # ***** R is set to be a RollingFileAppender.
        log4j.appender.R=org.apache.log4j.RollingFileAppender
        log4j.appender.R.File=logs/myapp.log
        # ***** Max file size is set to 100KB
        log4j.appender.R.MaxFileSize=102400KB
        # ***** Keep one backup file
        log4j.appender.R.MaxBackupIndex=5
        # ***** R uses PatternLayout.
        log4j.appender.R.layout=org.apache.log4j.PatternLayout
        log4j.appender.R.layout.ConversionPattern=%p %t %d %c - %m%n

        # set httpclient debug levels
        log4j.logger.org.apache.component=ERROR,stdout
        log4j.logger.httpclient.wire=ERROR,stdout
        log4j.logger.org.apache.commons.httpclient=ERROR,stdout
        log4j.logger.org.apache.http.client.protocol=ERROR,stdout

    UPDATE: Adding a thread dump sample from all my threads (100):

        "pool-1-thread-5" - Thread t@25
           java.lang.Thread.State: BLOCKED on org.apache.log4j.spi.RootLogger@1d45a585 owned by: pool-1-thread-35
                at org.apache.log4j.Category.callAppenders(Category.java:201)
                at org.apache.log4j.Category.forcedLog(Category.java:388)
                at org.apache.log4j.Category.error(Category.java:302)
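
    One thing often suggested when every thread blocks in Category.callAppenders is to decouple logging from the worker threads with log4j's AsyncAppender. In log4j 1.2 that appender is configured via XML rather than a .properties file, so this is a hedged sketch of a log4j.xml fragment wrapping the existing RollingFileAppender (appender names match the config above; the buffer size is illustrative):

        <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
          <!-- Worker threads hand events to this buffer and return immediately. -->
          <param name="BufferSize" value="512"/>
          <appender-ref ref="R"/>
        </appender>

        <root>
          <priority value="warn"/>
          <appender-ref ref="stdout"/>
          <appender-ref ref="ASYNC"/>
        </root>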

    Read the article

  • Java synchronized seems ignored

    - by viraptor
    Hi, I've got the following code, which I expected to deadlock after printing out "Main: pre-sync". But it looks like synchronized doesn't do what I expect it to. What happens here?

        import java.util.*;

        public class deadtest {

            public static class waiter implements Runnable {
                Object obj;

                public waiter(Object obj) {
                    this.obj = obj;
                }

                public void run() {
                    System.err.println("Thread: pre-sync");
                    synchronized (obj) {
                        System.err.println("Thread: pre-wait");
                        try { obj.wait(); } catch (Exception e) { }
                        System.err.println("Thread: post-wait");
                    }
                    System.err.println("Thread: post-sync");
                }
            }

            public static void main(String args[]) {
                Object obj = new Object();
                System.err.println("Main: pre-spawn");
                Thread waiterThread = new Thread(new waiter(obj));
                waiterThread.start();
                try { Thread.sleep(1000); } catch (Exception e) { }

                System.err.println("Main: pre-sync");
                synchronized (obj) {
                    System.err.println("Main: pre-notify");
                    obj.notify();
                    System.err.println("Main: post-notify");
                }
                System.err.println("Main: post-sync");

                try { waiterThread.join(); } catch (Exception e) { }
            }
        }

    Since both threads synchronize on the created object, I expected them to actually block each other. Currently, the code happily notifies the other thread, joins, and exits.
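
    The behaviour comes down to Object.wait() releasing the monitor while the thread waits, so the main thread can enter its own synchronized block; the blocking the poster expected only happens when the first thread holds the lock without waiting. A minimal sketch of that contrast (class name and sleep times are arbitrary):

        public class HoldDemo {
            public static void main(String[] args) throws InterruptedException {
                final Object lock = new Object();

                Thread holder = new Thread(new Runnable() {
                    public void run() {
                        synchronized (lock) {
                            System.err.println("holder: holding the lock for 3 seconds");
                            // sleep() keeps the monitor, unlike wait(), so nobody else can enter.
                            try { Thread.sleep(3000); } catch (InterruptedException ignored) { }
                        }
                    }
                });
                holder.start();

                Thread.sleep(100); // give the holder time to grab the lock first
                System.err.println("main: trying to enter the synchronized block");
                synchronized (lock) {
                    // Only reached about 3 seconds later, once the holder leaves its block.
                    System.err.println("main: got the lock");
                }
                holder.join();
            }
        }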

    Read the article

  • Troubleshooting a COM+ application deadlock

    - by Chris Karcher
    I'm trying to troubleshoot a COM+ application that deadlocks intermittently. The last time it locked up, I was able to take a user-mode dump of the dllhost process and analyze it using WinDbg. After inspecting all the threads and locks, it all boils down to a critical section owned by this thread:

        ChildEBP RetAddr  Args to Child
        0deefd00 7c822114 77e6bb08 000004d4 00000000 ntdll!KiFastSystemCallRet
        0deefd04 77e6bb08 000004d4 00000000 0deefd48 ntdll!ZwWaitForSingleObject+0xc
        0deefd74 77e6ba72 000004d4 00002710 00000000 kernel32!WaitForSingleObjectEx+0xac
        0deefd88 75bb22b9 000004d4 00002710 00000000 kernel32!WaitForSingleObject+0x12
        0deeffb8 77e660b9 000a5cc0 00000000 00000000 comsvcs!PingThread+0xf6
        0deeffec 00000000 75bb21f1 000a5cc0 00000000 kernel32!BaseThreadStart+0x34

    The object it's waiting on is an event:

        0:016> !handle 4d4 f
        Handle 000004d4
          Type          Event
          Attributes    0
          GrantedAccess 0x1f0003: Delete,ReadControl,WriteDac,WriteOwner,Synch QueryState,ModifyState
          HandleCount   2
          PointerCount  4
          Name          <none>
        No object specific information available

    As far as I can tell, the event never gets signaled, causing the thread to hang and hold up several other threads in the process. Does anyone have any suggestions for next steps in figuring out what's going on? Now, seeing as the method is called PingThread, is it possible that it's trying to ping another thread in the process that's already deadlocked?

    UPDATE: This actually turned out to be a bug in the Oracle 10.2.0.1 client, although I'm still interested in ideas on how I could have figured this out without finding the bug in Oracle's bug database.
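
    For the general case (before the Oracle bug was known), a typical next step in WinDbg is to map out which threads own which locks and then walk the owning threads' stacks; roughly something like the following, though exact command availability depends on the loaded debugger extensions, so treat this as a starting point rather than a recipe:

        !locks          list locked critical sections and the threads that own them
        !cs -l -o       show locked critical sections with owner information
        ~~[<tid>]s      switch the debugger to the owning thread
        kb              dump that thread's stack to see what it, in turn, is waiting on
        ~*kb            dump every thread's stack and look for a wait cycle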

    Read the article

  • How to implement a counter when using golang's goroutine?

    - by MrROY
    I'm trying to make a queue struct that has push and pop functions. I need to use 10 goroutines to push and another 10 goroutines to pop data, just as I did in the code below. Questions: 1. I need to print out how much I have pushed/popped, but I don't know how to do that. 2. Is there any way to speed up my code? It is too slow for me.

        package main

        import (
            "runtime"
            "time"
        )

        const (
            DATA_SIZE_PER_THREAD = 10000000
        )

        type Queue struct {
            records string
        }

        func (self Queue) push(record chan interface{}) {
            // need push counter
            record <- time.Now()
        }

        func (self Queue) pop(record chan interface{}) {
            // need pop counter
            <-record
        }

        func main() {
            runtime.GOMAXPROCS(runtime.NumCPU())

            //record chan
            record := make(chan interface{}, 1000000)

            //finish flag chan
            finish := make(chan bool)

            queue := new(Queue)

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.push(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 10; i++ {
                go func() {
                    for j := 0; j < DATA_SIZE_PER_THREAD; j++ {
                        queue.pop(record)
                    }
                    finish <- true
                }()
            }

            for i := 0; i < 20; i++ {
                <-finish
            }
        }
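
    For the counting part, one lightweight option is a pair of counters updated with sync/atomic, so the hot path stays lock-free, and the totals are printed at the end. A minimal sketch layered on the question's channel (counter and function names are made up, and the loop sizes are shrunk for illustration):

        package main

        import (
            "fmt"
            "sync/atomic"
            "time"
        )

        var pushed, popped int64 // package-level counters for this sketch

        func push(record chan interface{}) {
            record <- time.Now()
            atomic.AddInt64(&pushed, 1) // safe to call from many goroutines
        }

        func pop(record chan interface{}) {
            <-record
            atomic.AddInt64(&popped, 1)
        }

        func main() {
            record := make(chan interface{}, 1000000)
            done := make(chan bool)

            go func() {
                for i := 0; i < 100000; i++ {
                    push(record)
                }
                done <- true
            }()
            go func() {
                for i := 0; i < 100000; i++ {
                    pop(record)
                }
                done <- true
            }()
            <-done
            <-done
            fmt.Printf("pushed=%d popped=%d\n",
                atomic.LoadInt64(&pushed), atomic.LoadInt64(&popped))
        }

    On the speed question, one thing to note is that a chan interface{} boxes every value sent; a typed channel such as chan time.Time (or chan struct{} if the payload doesn't matter) avoids that allocation and is usually faster.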

    Read the article

  • Proof of library bug vs developer side application bug

    - by Paralife
    I have a problem with a specific Java client library. I won't say here what the problem is or name the library, because my question is a different one. Here is the situation: I have made a program that uses the library. The program is a class named 'WorkerThread' that extends Thread. To start it, I have made a Main class that contains only a main() function that starts the thread and nothing else. The worker uses the library to communicate with a server and get results. The problem appears when I want to run 2 WorkerThreads simultaneously. What I first did was this in the Main class:

        public class Main {
            public static void main(String args[]) {
                new WorkerThread().start(); // 1st thread.
                new WorkerThread().start(); // 2nd thread.
            }
        }

    When I run this, both threads produce irrational results and, what is more, some results that should be received by the 1st thread are received by the 2nd instead. If instead of the above I just run 2 separate processes of one thread each, then everything works fine. Also: 1. There is no static class or method used inside WorkerThread that could cause the problem; my application consists of only the worker thread class and contains no static fields or methods. 2. The library is supposed to be usable in a multithreaded environment. In my thread I just create a new instance of one of the library's classes and then call methods on it. Nothing more. My question is this: without knowing any details of my implementation, are the above situation and facts enough to prove that there is a bug in the library and not in my program? Is it safe to assume that the library internally uses a static method or object that is indirectly shared by my 2 threads, and that this causes the problem? If not, then in what hypothetical situation could the bug originate in the worker class code?
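
    To make the "hidden shared state" hypothesis concrete, here is a contrived sketch (not the actual library, purely an illustration of the failure mode) of how a static field inside a library class can make two per-thread instances interfere, so replies get attributed to the wrong caller:

        // Contrived library with hidden static state; it only illustrates the suspected failure mode.
        class LeakyClient {
            private static String lastRequestTag;            // shared by every instance and thread

            public String call(String tag) {
                lastRequestTag = tag;                         // thread A writes...
                try { Thread.sleep(10); } catch (InterruptedException ignored) { }
                return "reply for " + lastRequestTag;         // ...but thread B may have overwritten it
            }
        }

        public class CrossTalkDemo {
            public static void main(String[] args) {
                Runnable worker1 = new Runnable() {
                    public void run() { System.out.println("worker-1 got: " + new LeakyClient().call("worker-1")); }
                };
                Runnable worker2 = new Runnable() {
                    public void run() { System.out.println("worker-2 got: " + new LeakyClient().call("worker-2")); }
                };
                new Thread(worker1).start();
                new Thread(worker2).start();
            }
        }

    Run a few times, one worker will occasionally report the other's tag, even though each thread created its own instance, which is essentially the cross-talk described above.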

    Read the article

  • STLport crash (race condition, Darwin only?)

    - by Jonas Byström
    When I run STLport on Darwin I get a strange crash. (I haven't seen it anywhere other than on Mac, but exactly the same crash happens on both i686 and PowerPC.) This is what it looks like in gdb:

        Program received signal EXC_BAD_ACCESS, Could not access memory.
        Reason: 13 at address: 0x0000000000000000
        [Switching to process 21097]
        0x000000010120f47c in stlp_std::__node_alloc_impl::_M_allocate ()

    It may be some setting in STLport; I noticed that Mac.h and MacOSX.h seemed far behind on features. I also know that it must be some type of race condition, since it doesn't occur just by calling this method (implicitly called). The crash happens mainly when I push the system, running 10 simultaneous threads that do a lot of string handling. Other theories I have come up with have to do with compiler flags (configure script) and g++ 4.2 bugs (it seems 4.4.3 isn't on Mac yet with Objective-C support, which I need to link with). HELP!!! :)

    Edit: I run unit tests, which do all sorts of things. This problem arises when I start 10 threads that push the system, and it always comes down to std::string::append, which eventually boils down to _M_allocate. Since I can't even get a decent dump of the code that's causing the problem, I figure I'm doing something bad. Could that be because it's trying to execute at instruction pointer 0x000...000? Are dylibs built like DLLs on Windows, with a jump table? Could it perhaps be that such a jump table has been overwritten for some reason? That would probably explain this behavior. (The code is huge; if I run out of other ideas, I'll post a minimal crashing sample here.)
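
    Since the poster mentions wanting a minimal crashing sample, a stripped-down stress test along these lines (pthreads plus concurrent std::string::append) is usually enough to show whether the STLport node allocator misbehaves under contention on a given build. This is a repro sketch only, not a fix, and the build line is an assumption about the local STLport install:

        // Build against the same STLport the application uses, e.g.:
        //   g++ -I<stlport-include> -L<stlport-lib> stress.cpp -lstlport -lpthread
        #include <pthread.h>
        #include <string>
        #include <cstdio>

        static void* hammer(void*)
        {
            for (int i = 0; i < 100000; ++i) {
                std::string s;
                for (int j = 0; j < 64; ++j) {
                    s.append("some reasonably long chunk of text ");  // exercises _M_allocate
                }
            }
            return 0;
        }

        int main()
        {
            const int kThreads = 10;
            pthread_t threads[kThreads];
            for (int i = 0; i < kThreads; ++i) {
                pthread_create(&threads[i], 0, hammer, 0);
            }
            for (int i = 0; i < kThreads; ++i) {
                pthread_join(threads[i], 0);
            }
            std::printf("no crash this run\n");
            return 0;
        }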

    Read the article
