Search Results

Search found 5922 results on 237 pages for 'boost ptr container'.

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application that accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which accesses the database for its own purpose. It has a simple ORM kind of model, but that really is not an important factor here. There are three potential access patterns I can think of: 1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections. 2. Every object could initiate a connection on its own. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: a. the object keeps a persistent connection for its whole life, or b. the object initiates a connection as and when needed. 3. To reduce the contention of case 1 without creating as many sockets as case 2, we can have group/set-based connections. So there could be more than one connection (say N), and each of these connections could be shared across M objects. Naturally, each of these patterns has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose among these patterns for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Is there any other pattern which is better? PS: I have been through these questions: mysql, one connection vs multiple and MySQL with multiple threads and processes. But they don't quite answer exactly what I am trying to ask.
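
    To make option 3 concrete, here is a minimal C++11 sketch of what I have in mind (std::mutex and std::condition_variable standing in for their Boost equivalents; Connection is a hypothetical stand-in for a real MySQL handle, not an actual driver type). Callers block in acquire() while all N connections are busy, which caps the number of open sockets while keeping contention limited to the queue lock:

        #include <condition_variable>
        #include <cstddef>
        #include <memory>
        #include <mutex>
        #include <queue>

        struct Connection { /* stand-in for a real MySQL handle, e.g. MYSQL* */ };

        // Option 3: N shared connections for M objects.
        class ConnectionPool {
        public:
            explicit ConnectionPool(std::size_t n) {
                for (std::size_t i = 0; i < n; ++i)
                    idle_.push(std::make_shared<Connection>());
            }
            // Blocks until a connection is free.
            std::shared_ptr<Connection> acquire() {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !idle_.empty(); });
                std::shared_ptr<Connection> c = idle_.front();
                idle_.pop();
                return c;
            }
            // Returns a connection to the pool and wakes one waiter.
            void release(std::shared_ptr<Connection> c) {
                { std::lock_guard<std::mutex> lock(m_); idle_.push(std::move(c)); }
                cv_.notify_one();
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::shared_ptr<Connection>> idle_;
        };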

    Read the article

  • Kinect will recognise your finger movement

    - by Boonei
    Sources inside Microsoft suggest that the MS guys are trying to improve the motion controller. This could be a huge boost to gaming. There are quite a few things that we do with our fingers when playing games, like driving, shooting, sports, etc. If fingers are captured, then Xbox can give a more realistic version of our avatar. Eurogamer has also suggested the same according to their sources. It would be only (mostly) a software update and would not require a new camera, because the USB controller interface currently in place in Kinect can take in data at up to 35 MB/s, and current utilization is only around half of that, so there is headroom to send more data. A little more tech data: Kinect currently transmits 320×240 pixels at 30 fps; if the device could capture and transfer 640×480 pixels, the higher resolution could detect finer movements than the current level of detection allows. Let's wait and watch!

    Read the article

  • can these be made unambiguous

    - by R Samuel Klatchko
    I'm trying to create a set of overloaded templates for arrays/pointers, where one template will be used when the compiler knows the size of the array and the other template will be used when it doesn't: template <typename T, size_t SZ> void moo(T (&arr)[SZ]) { ... } template <typename T> void moo(T *ptr) { ... } The problem is that when the compiler knows the size of the array, the overloads are ambiguous and the compile fails. Is there some way to resolve the ambiguity (perhaps via SFINAE), or is this just not possible?
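
    For reference, one way to sidestep the ambiguity is a single forwarding entry point with tag dispatch, so the two implementations never compete in overload resolution. This is a C++11 sketch of the general technique, not the only possible fix:

        #include <cstddef>
        #include <iostream>
        #include <type_traits>
        #include <utility>

        template <typename T, std::size_t SZ>
        void moo_impl(T (&arr)[SZ], std::true_type) {
            std::cout << "array of " << SZ << " elements\n"; // size known
        }

        template <typename T>
        void moo_impl(T* ptr, std::false_type) {
            std::cout << "bare pointer\n";                   // size unknown
        }

        // Single entry point: deduce once, then dispatch on is_array so only
        // one implementation is ever viable for a given argument.
        template <typename T>
        void moo(T&& t) {
            moo_impl(std::forward<T>(t),
                     std::is_array<typename std::remove_reference<T>::type>());
        }

        int main() {
            int arr[4] = {0, 1, 2, 3};
            int* p = arr;
            moo(arr); // picks the array overload, SZ == 4
            moo(p);   // picks the pointer overload
        }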

    Read the article

  • Gmail rejects emails. Openspf.net fails the tests

    - by pablomedok
    I've got a problem with Gmail. It started after one of our trojan-infected PCs sent spam for one day from our IP address. We've fixed the problem, but we had landed on 3 blacklists. We've fixed that, too. But still, every time we send an email to Gmail, the message is rejected. So I've checked Google's Bulk Senders guide once again, found an error in our SPF record, and fixed it. Google says everything should become fine after some time, but this hasn't happened: 3 weeks have already passed and we still can't send emails to Gmail. Our MX setup is a bit complex, but not too much. We have a domain name delo-company.com; it has its own mail @delo-company.com (this one is fine, but the problems are with the subdomain corp.delo-company.com). The delo-company.com domain has several DNS records for the subdomain: corp A 82.209.198.147 corp MX 20 corp.delo-company.com corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all" (I set ~all for testing purposes only; it was -all before that). These records are for our corporate Exchange 2003 server at 82.209.198.147. Its LAN name is s2.corp.delo-company.com, so its HELO/EHLO greeting is also s2.corp.delo-company.com. To pass the EHLO check we've also created some records in delo-company.com's DNS: s2.corp A 82.209.198.147 s2.corp.delo-company.com TXT "v=spf1 ip4:82.209.198.147 ~all" As I understand it, SPF verification should pass in this way: Our server s2 connects to the recipient's MX (Rcp.MX): EHLO s2.corp.delo-company.com. Rcp.MX says OK and makes an SPF check of the HELO/EHLO name. It does an NS lookup for s2.corp.delo-company.com and gets the above DNS records. The TXT record says that s2.corp.delo-company.com should send only from IP 82.209.198.147, so this check should pass. Then our s2 server sends MAIL FROM:, and the Rcp.MX server checks that, too. The values are the same, so that should also be positive. Maybe there is also an rDNS check, but I'm not sure whether the HELO name or the MAIL FROM is checked. Our PTR record for 82.209.198.147 is: 147.198.209.82.in-addr.arpa. 86400 IN PTR s2.corp.delo-company.com. To me everything looks fine, but anyway all emails are rejected by Gmail. So, I've checked MXtoolbox.com - it says everything is fine, I passed the http://www.kitterman.com/spf/validate.html Python check, and I did the 25port.com email test.
    It's fine, too: Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (82.209.198.147) by verifier.port25.com id ha45na11u9cs for <[email protected]>; Fri, 2 Mar 2012 13:03:21 -0500 (envelope-from <[email protected]>) Authentication-Results: verifier.port25.com; spf=pass [email protected] Authentication-Results: verifier.port25.com; domainkeys=neutral (message not signed) [email protected] Authentication-Results: verifier.port25.com; dkim=neutral (message not signed) Authentication-Results: verifier.port25.com; sender-id=pass [email protected] Content-class: urn:content-classes:message MIME-Version: 1.0 Content-Type: multipart/alternative; boundary="----_=_NextPart_001_01CCF89E.BE02A069" Subject: test Date: Fri, 2 Mar 2012 21:03:15 +0300 X-MimeOLE: Produced By Microsoft Exchange V6.5 Message-ID: <[email protected]> X-MS-Has-Attach: X-MS-TNEF-Correlator: Thread-Topic: test Thread-Index: Acz4jS34oznvbyFQR4S5rXsNQFvTdg== From: =?koi8-r?B?89XQ0tXOwMsg8MHXxcw=?= <[email protected]> To: <[email protected]> I also checked with [email protected], but it FAILs all the time, no matter which SPF records I make: <s2.corp.delo-company.com #5.7.1 smtp;550 5.7.1 <[email protected]>: Recipient address rejected: SPF Tests: Mail-From Result="softfail": Mail From="[email protected]" HELO name="s2.corp.delo-company.com" HELO Result="softfail" Remote IP="82.209.198.147"> I've filled in Gmail's form twice, but nothing happens. We do not send spam, only emails for our clients. 2 or 3 times we did mass emails (like New Year greetings and sales promos) from corp.delo-company.com addresses, but they were all compliant with Gmail's Bulk Senders guide (I mean SPF, open relays, Precedence: Bulk and Unsubscribe tags), so this should not be the problem. Please help me: what am I doing wrong? UPD: I also tried the Unlocktheinbox.com test and the server fails it, too. Here is the result: http://bit.ly/wYr39h . Here is one more: http://bit.ly/ypWLjr I also tried to send email from that server manually via telnet and everything is fine. Here is what I type: 220 mx.google.com ESMTP g15si4811326anb.170 HELO s2.corp.delo-company.com 250 mx.google.com at your service MAIL FROM: <[email protected]> 250 2.1.0 OK g15si4811326anb.170 RCPT TO: <[email protected]> 250 2.1.5 OK g15si4811326anb.170 DATA 354 Go ahead g15si4811326anb.170 From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test . 250 2.0.0 OK 1330795021 g15si4811326anb.170 QUIT 221 2.0.0 closing connection g15si4811326anb.170 And this is what I get: Delivered-To: [email protected] Received: by 10.227.132.73 with SMTP id a9csp96864wbt; Sat, 3 Mar 2012 09:17:02 -0800 (PST) Received: by 10.101.128.12 with SMTP id f12mr4837125ann.49.1330795021572; Sat, 03 Mar 2012 09:17:01 -0800 (PST) Return-Path: <[email protected]> Received: from s2.corp.delo-company.com (s2.corp.delo-company.com. [82.209.198.147]) by mx.google.com with SMTP id g15si4811326anb.170.2012.03.03.09.15.59; Sat, 03 Mar 2012 09:17:00 -0800 (PST) Received-SPF: pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) client-ip=82.209.198.147; Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 82.209.198.147 as permitted sender) [email protected] Date: Sat, 03 Mar 2012 09:17:00 -0800 (PST) Message-Id: <[email protected]> From: [email protected] To: Pavel <[email protected]> Subject: Test 28 This is telnet test

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several cutting-edge new features that firmly establish GoldenGate's leading position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides in simplifying the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: it "...performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights: Optimized for Oracle Database 12c - GoldenGate 12c is custom-tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process, so a single GoldenGate capture process is used at the container level rather than one at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here. Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed-data delivery to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time needed to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently, without any interaction with or knowledge of other delivery processes. This setup was complicated to configure and time-consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of Integrated Delivery click here. Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance. Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication when unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems. Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release of GoldenGate, support will be expanded for MS SQL Server, DB2, and Teradata. Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature. Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed-data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology. Stay tuned for more blogs on the new features and the upcoming launch webcast, where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL, I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved). For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellated each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible... So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!

    Read the article

  • Once mounted using TrueCrypt, cannot unmount

    - by zeiger
    I have an external HDD and use TrueCrypt for keeping encrypted file containers. After mounting, whenever I try to dismount a file container (using TrueCrypt 7.0a on Ubuntu 11.04), it just does not happen and I get the following message: device-mapper: remove ioctl failed: Device or resource busy Command failed Further, if I close TrueCrypt and then try to start it again, it says that TrueCrypt is already running, but I cannot access it from the Unity sidebar (because it is not there). Also, if I power down my external HDD, the TrueCrypt volume still shows as one of the mounted volumes, but I cannot do anything with it. Is there any possible workaround? I remember this NOT happening in earlier versions of Ubuntu, so I am guessing it has something to do with 11.04. Thanks
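
    For reference, a sketch of one way to chase this from a terminal, assuming the default mount point /media/truecrypt1 (adjust the path for your setup): "Device or resource busy" usually means some process still holds a file open on the volume, and fuser can name it; TrueCrypt's command-line mode can then force the dismount.

        # find which process is still using the mounted volume (path is an assumption)
        sudo fuser -vm /media/truecrypt1
        # close that process, then dismount; --force only as a last resort
        truecrypt -d --force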

    Read the article

  • Accordion menu is not working properly

    - by Zerotoinfinite
    Hi, I am using an accordion menu in my site, placed in the Master Page. It has 4 panes, and each pane has multiple links. It was working fine previously. But suddenly it is displaying this error message: "There is no source code available for the current location." It is asking to show the assembly. When I clicked on that, I got a stop at the stack: 0000001a mov eax,dword ptr [ebp-0Ch] -- I am really not sure what this means. Also, there are several links in the accordion menu; after clicking on any one of the links I am not able to click the other links, maybe because of the above reason. Please help

    Read the article

  • Customer Concepts TV

    - by Richard Lefebvre
    Eliminate the Guesswork in Your Customer's Sales Organization. Selling is the lifeblood of every business. In the past, companies would increase headcount to boost sales. In today's business environment, companies need to re-evaluate the way in which they sell. Sales and marketing organisations must optimise performance, increase team productivity and focus on the best opportunities. Oracle Fusion CRM has been specifically designed with tools to help sales and marketing teams improve efficiency and drive revenue. Territory modeling and management, quota and commission management, collaborative features, real-time customer information and mobile device integration are just some of the features incorporated. Join us on Customer Concepts TV as we aim to help you find the right strategy for your prospects and customers. Whether they already have a CRM solution in place or are looking for the next level of CRM implementation, this online TV show will give you very practical advice to help you make the most of your CRM implementation. Register now to reserve your spot for this exclusive, live-streamed event. Customer Concepts TV comes to you on April 24. Watch the Customer Concepts TV trailer here

    Read the article

  • How do you cope with change in open source frameworks that you use for your projects?

    - by Amy
    It may be a personal quirk of mine, but I like keeping code in living projects up to date - including the libraries/frameworks that they use. Part of it is that I believe a web app is more secure if it is fully patched and up to date. Part of it is just a touch of obsessive compulsiveness on my part. Over the past seven months, we have done a major rewrite of our software. We dropped the Xaraya framework, which was slow and essentially dead as a product, and converted to Cake PHP. (We chose Cake because it gave us the chance to do a very rapid rewrite of our software, and enough of a performance boost over Xaraya to make it worth our while.) We implemented unit testing with SimpleTest, and followed all the file and database naming conventions, etc. Cake is now being updated to 2.0. And, there doesn't seem to be a viable migration path for an upgrade. The naming conventions for files have radically changed, and they dropped SimpleTest in favor of PHPUnit. This is pretty much going to force us to stay on the 1.3 branch because, unless there is some sort of conversion tool, it's not going to be possible to update Cake and then gradually improve our legacy code to reap the benefits of the new Cake framework. So, as usual, we are going to end up with an old framework in our Subversion repository and just patch it ourselves as needed. And this is what gets me every time. So many open source products don't make it easy enough to keep projects based on them up to date. When the devs start playing with a new shiny toy, a few critical patches will be done to older branches, but most of their focus is going to be on the new code base. How do you deal with radical changes in the open source projects that you use? And, if you are developing an open source product, do you keep upgrade paths in mind when you develop new versions?

    Read the article

  • Doing powerups in a component-based system

    - by deft_code
    I'm just starting to really get my head around component-based design. I don't know what the "right" way to do this is. Here's the scenario: the player can equip a shield. The shield is drawn as a bubble around the player, it has a separate collision shape, and it reduces the damage the player receives from area effects. How is such a shield architected in a component-based game? Where I get confused is that the shield obviously has three components associated with it: damage reduction / filtering, a sprite, and a collider. To make it worse, different shield variations could have even more behaviors, all of which could be components: boost player maximum health, health regen, projectile deflection, etc. Am I overthinking this? Should the shield just be a super component? I really think this is the wrong answer, so if you think this is the way to go, please explain. Should the shield be its own entity that tracks the location of the player? That might make it hard to implement the damage filtering. It also kind of blurs the lines between attached components and entities. Should the shield be a component that houses other components? I've never seen or heard of anything like this, but maybe it's common and I'm just not deep enough yet. Should the shield just be a set of components that get added to the player? Possibly with an extra component to manage the others, e.g. so they can all be removed as a group (accidentally leave behind the damage reduction component, now that would be fun). Or something else that's obvious to someone with more component experience?
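
    To make the last option concrete, here is a rough C++ sketch of what I imagine: the shield is a set of components added to the player, plus a grouping component that equips and unequips them atomically. All names here are illustrative, not from any particular engine:

        #include <algorithm>
        #include <memory>
        #include <string>
        #include <vector>

        struct Component { virtual ~Component() {} };

        struct Entity {
            std::vector<std::shared_ptr<Component>> components;
            void add(const std::shared_ptr<Component>& c) { components.push_back(c); }
            void remove(const std::shared_ptr<Component>& c) {
                components.erase(
                    std::remove(components.begin(), components.end(), c),
                    components.end());
            }
        };

        struct DamageFilter : Component { float reduction; };
        struct Sprite       : Component { std::string image; };
        struct Collider     : Component { float radius; };

        // Manager component: attaches/detaches the shield's parts as one unit,
        // so the damage-reduction piece can't be left behind by accident.
        struct PowerupGroup : Component {
            std::vector<std::shared_ptr<Component>> parts;
            void attach(Entity& e) { for (auto& p : parts) e.add(p); }
            void detach(Entity& e) { for (auto& p : parts) e.remove(p); }
        };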

    Read the article

  • Today's Links (6/17/2011)

    - by Bob Rhubart
    Call for Nominations: Oracle Eco-Enterprise Innovation Awards Is your organization using Oracle products to reduce your environmental footprint while reducing costs? If so, submit your nomination for Oracle's Eco-Enterprise Innovation award. These awards will be presented to select customers and their partners who are using any of Oracle's products to not only take an environmental lead, but also to reduce their costs and improve their business efficiencies by using green business practices. Beyond The Data Grid: Coherence, Normalization, Joins, and Linear Scalability | Ben Stopford Ben Stopford presents ODC, a highly distributed in-memory normalized NoSQL datastore designed for scalability, based on normalized data, Snowflake Schema, and Connected Replication pattern. Upgrading ALSB services to OSB | John Chin-a-Woeng John Chin-a-Woeng walks you through the upgrade from Aqualogic Service Bus (ALSB 3.0) to Oracle Service Bus (OSB 10.3). SOA & Middleware: Pinning tasks to a user in BPM 11g | Niall Commiskey Commiskey illustrates a scenario. JDeveloper 11gR2: New option Test WebService in WSDL editor | Lucas Jellema The "Test WebService" button in the WSDL Editor in JDeveloper 11gr2 is "just a little feature addition," says Oracle ACE Director Lucas Jellema. "But it can be quite useful all the same." Enterprise Business Intelligence 11g Seminar with Mark Rittman Oracle ACE Director Mark Rittman conducts a two-day course for Oracle University, in Dublin, IE, July 4-5, 2011. Data Integration Webcast Series Join Oracle experts for a series covering our data integration solutions. You’ll get invaluable information to help boost your data infrastructure so that you can accelerate your business.

    Read the article

  • The Science Behind Salty Airline Food

    - by Jason Fitzpatrick
    In this collection, artist Signe Emma combines a scientific overview of the role salt plays in airline food with electron microscope scans of salt crystals arranged to look like the views from an airplane - a rather clever and visually stunning way to deliver the message. Attached to the collection is this explanation of why airlines load their snacks and meals with salt: White noise consists of a random collection of sounds at different frequencies, and scientists have demonstrated that it is capable of diminishing the taste of salt. At low-pressure conditions, higher taste and odour thresholds of flavourings are generally observed. At 30,000 feet the cabin humidity drops by 15%, and the lowered air pressure forces bodily fluids upwards. With less humidity, people have less moisture in their throat, which slows the transport of odours to the brain's smell and taste receptors. That means that if a meal is to taste the same up in the air as on the ground, it needs 30% extra salt. To combat the double assault on our sense of taste, the airlines boost the salt content to compensate. For more neat microscope scans as high-altitude view photographs, hit up the link below.

    Read the article

  • What does: throw 0 do/mean? Is it "bad"?

    - by MartinP
    Context: I came across some code like this: if( Some_Condition ) throw 0; I googled a bit and found a few other code snippets using that odd-looking throw 0 form. I presume one would catch this as: catch(const int& e) { } Or is this a null pointer, to be caught as void*? Question: What does this throw 0 do? Is it special in some way? My normal preference would be to throw something that is (or is derived from) std::exception. So to me this looks "bad". Is it "bad"?
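
    For reference, a throw expression takes the type of its operand, so throw 0 throws a plain int with value 0; it is never treated as a null pointer. A minimal program showing which handler fires:

        #include <iostream>

        int main() {
            try {
                throw 0;                  // the literal 0 has type int
            } catch (void*) {
                std::cout << "not reached: an int is never caught as a pointer\n";
            } catch (const int& e) {
                std::cout << "caught int: " << e << '\n';  // this handler fires
            }
        }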

    Read the article

  • How to Convert DMG Files to ISO Files on Windows

    - by Taylor Gibb
    The DMG image format is by far the most popular file container format used to distribute software on Mac OS X. Here's how to convert a DMG file into an ISO file that can be mounted on a Windows PC. First head over to this website and grab yourself a copy of dmg2img by clicking on the win32 binary link. Once the file has downloaded, open your Downloads folder, right-click on the file, and select Extract All from the context menu.
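
    From there, the conversion itself is a single command-line invocation. A sketch, assuming you run it from the extracted folder and using hypothetical file names:

        dmg2img.exe MyImage.dmg MyImage.iso

    dmg2img writes an uncompressed image, so giving the output an .iso extension produces the file the rest of the article works with.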

    Read the article

  • Performance of concurrent software on multicore processors

    - by Giorgio
    Recently I have often read that, since the trend is to build processors with multiple cores, it will be increasingly important to have programming languages that support concurrent programming in order to better exploit the parallelism offered by these processors. In this respect, certain programming paradigms or models are considered well-suited for writing robust concurrent software: Functional programming languages, e.g. Haskell, Scala, etc. The actor model: Erlang, but also available for Scala / Java (Akka), C++ (Theron, Casablanca, ...), and other programming languages. My questions: What is the state of the art regarding the development of concurrent applications (e.g. using multi-threading) using the above languages / models? Is this area still being explored or are there well-established practices already? Will it be more complex to program applications with a higher level of concurrency, or is it just a matter of learning new paradigms and practices? How does the performance of highly concurrent software compare to the performance of more traditional software when executed on multicore processors? For example, has anyone implemented a desktop application using C++ / Theron, or Java / Akka? Was there a boost in performance on a multicore processor due to higher parallelism?
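
    To make the last question concrete, here is the kind of measurement I mean; plain C++11 futures, no actor library, with a four-way split assumed arbitrarily for a quad-core machine:

        #include <chrono>
        #include <future>
        #include <iostream>
        #include <numeric>
        #include <vector>

        int main() {
            std::vector<double> data(1 << 24, 1.0); // ~16M doubles

            auto t0 = std::chrono::steady_clock::now();
            double seq = std::accumulate(data.begin(), data.end(), 0.0);
            auto t1 = std::chrono::steady_clock::now();

            const std::size_t parts = 4; // assumption: quad-core
            const std::size_t chunk = data.size() / parts;
            std::vector<std::future<double>> futures;
            for (std::size_t i = 0; i < parts; ++i) {
                auto begin = data.begin() + i * chunk;
                auto end = (i + 1 == parts) ? data.end() : begin + chunk;
                futures.push_back(std::async(std::launch::async,
                    [begin, end] { return std::accumulate(begin, end, 0.0); }));
            }
            double par = 0.0;
            for (auto& f : futures) par += f.get();
            auto t2 = std::chrono::steady_clock::now();

            using ms = std::chrono::milliseconds;
            std::cout << "sequential: "
                      << std::chrono::duration_cast<ms>(t1 - t0).count()
                      << " ms (sum " << seq << ")\n"
                      << "4-way async: "
                      << std::chrono::duration_cast<ms>(t2 - t1).count()
                      << " ms (sum " << par << ")\n";
        }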

    Read the article

  • Virtual Function Implementation

    - by Gokul
    Hi, I keep hearing this statement: switch..case is evil for code maintenance, but it provides better performance (since the compiler can inline stuff, etc.). Virtual functions are very good for code maintenance, but they incur a performance penalty of two pointer indirections. Say I have a base class with 2 subclasses (X and Y) and one virtual function; there will then be two virtual tables. The object has a pointer, based on which it chooses a virtual table. So for the compiler, it is more like switch( object's function ptr ) { case 0x....: X->call(); break; case 0x....: Y->call(); }; So why should a virtual function cost more if it can be implemented this way, since the compiler can do the same inlining and other tricks here? Or explain to me why it was decided not to implement virtual function execution in this way. Thanks, Gokul.
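
    For reference, the usual answer is that a switch needs a closed set of cases visible at the call site, while the set of subclasses is open: any other translation unit or dynamically loaded library can add one, so the compiler generally cannot enumerate the targets and emits an indirect call instead. A rough model of what obj->call() becomes (illustrative only, not any particular ABI):

        // One table per class, one hidden pointer per object, an indirect
        // call at the site. This call site works for subclasses that do not
        // even exist yet when it is compiled, which is exactly what a switch
        // over known cases cannot do.
        struct VTable { void (*call)(void* self); };
        struct Object { const VTable* vptr; };

        void x_call(void* self) { /* X::call body */ }
        const VTable x_vtable = { &x_call };

        void dispatch(Object* obj) {
            obj->vptr->call(obj); // load vptr, load the slot, call indirectly
        }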

    Read the article

  • Is it safe to catch an access violation in this scenario?

    - by Eloff
    I've read a lot, including here on SO, suggesting that this is a very bad idea in general and that the only thing you can do safely is exit the program. I'm not sure that this is true. This is for a pooling memory allocator that hands off large allocations to malloc. During pool_free() a pointer needs to be checked for whether it belongs to a pool or was allocated with malloc. By rounding the address down to the nearest 1MB boundary, I get a pointer to the beginning of a block of memory in the pool, or to something undefined if malloc was used. In the first case I can easily verify that the block of memory belongs to the pool; but if it does not, I will either fail this verification or I will get an access violation (note that this is a read-only process). Could I not catch this with SEH (Windows) or handle the signal (POSIX) and simply treat it as a failed verification? (i.e. this is only possible if malloc was used, so pass the ptr to free())
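
    To make the idea concrete, here is a sketch of the SEH variant (MSVC-specific; the 1 MB alignment, the magic header, and all names are assumptions about the pool, not a definitive implementation):

        #include <windows.h>
        #include <cstdint>

        const std::uintptr_t kBlockSize = 1u << 20;    // hypothetical 1 MB pool blocks
        struct BlockHeader { std::uint32_t magic; };   // hypothetical pool header
        const std::uint32_t kPoolMagic = 0xB10CB10Cu;

        static bool looks_like_pool_block(void* p) {
            std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p);
            const BlockHeader* base =
                reinterpret_cast<const BlockHeader*>(addr & ~(kBlockSize - 1));
            return base->magic == kPoolMagic;          // may fault for malloc'd memory
        }

        bool is_pool_pointer(void* p) {
            __try {
                return looks_like_pool_block(p);
            } __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                            ? EXCEPTION_EXECUTE_HANDLER
                            : EXCEPTION_CONTINUE_SEARCH) {
                return false;                          // faulted: treat as malloc'd
            }
        }

    The usual caveat still applies: the rounded-down address may be readable yet happen to contain the magic value, so the check is probabilistic unless the pool also records its block addresses somewhere it can consult.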

    Read the article

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader. Which, I'm guessing, is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code: private const SIZE : int = Math.pow(10, 5); private var testVectorNum : Vector.<Number>; private function testShader():void { shader.data.ab.value = [1.0, 8.0]; shader.data.src.input = testVectorNum; shader.data.src.width = SIZE/400; shader.data.src.height = 100; shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1); var time : int = getTimer(), i : int = 0; shaderJob.start(true); trace("TEST1 : ", getTimer() - time); } The problem is that I keep getting an error saying: [Fault] exception, information=Error: Error #1000: The system is out of memory. Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a 3x speed boost with Pixel Bender): private function testShader():void { shader.data.ab.value = [1.0, 8.0]; var time : int = getTimer(), i : int = 0; testBitmapData.setVector(testBitmapData.rect, testVectorInt); shader.data.src.input = testBitmapData; shaderJob = new ShaderJob(shader, testBitmapData); shaderJob.start(true); testVectorInt = testBitmapData.getVector(testBitmapData.rect); trace("TEST1 : ", getTimer() - time); }

    Read the article

  • VMware sharing folders between Win7 (host) and Ubuntu (guest)

    - by Pawel Goscicki
    What is the best way to share a folder between a Win7 64-bit (host) and Ubuntu 10.10 (guest) in VMware Player? I can set up the sharing just fine (using vmware-tools), but all shared files end up owned root:root with the 777 permission set. Which, well, sucks. What can I do to have shared files with preserved ownership and permissions? I'm guessing I would need some kind of file container that would get mounted in Ubuntu as a block device (if so, it would need to be dynamic, i.e. expand with the size of the contained files). But maybe there is a better solution?
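
    To sketch what I am guessing at, assuming vmware-tools mounts the share at /mnt/hgfs/shared (the paths and size here are hypothetical): a sparse file grows with its contents, and an ext4 filesystem inside it preserves Linux ownership and permissions.

        # create a sparse 10 GB container (only consumes space as files are added)
        truncate -s 10G /mnt/hgfs/shared/container.img
        # put a Linux filesystem inside it; -F is needed because it's a plain file
        mkfs.ext4 -F /mnt/hgfs/shared/container.img
        # mount it loopback in the guest; files inside keep ownership/permissions
        sudo mkdir -p /mnt/container
        sudo mount -o loop /mnt/hgfs/shared/container.img /mnt/container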

    Read the article

  • Performing client-side OAuth authorized Twitter API calls versus server side, how much of a difference is there in terms of performance?

    - by Terence Ponce
    I'm working on a Twitter application in Ruby on Rails. One of the biggest arguments that I have with other people on the project is the method of calling the Twitter API. Before, everything was done on the server: OAuth login, updating the user's Twitter data, and retrieving tweets. Retrieving tweets was the heaviest thing to do since we don't store the tweets in our database, so viewing the tweets means that we have to call the API every time. One of the people on the project suggested that we fetch the tweets through JavaScript instead to lessen the load on the server. We used GET search, which, correct me if I'm wrong, will be removed when v1.0 becomes completely deprecated, but that really isn't a concern now. When the Twitter API has migrated completely to v1.1 (again, correct me if I'm wrong), every call to the API must be authenticated, so we have to authenticate our JavaScript requests to the API. As said here: We don't support or recommend performing OAuth directly through Javascript -- it's insecure and puts your application at risk. The only acceptable way to perform it is if you kept all keys and secrets server-side, computed the OAuth signatures and parameters server side, then issued the request client-side from the server-generated OAuth values. If we do exactly what Twitter suggests, the only difference between this and doing everything server-side is that our server won't have to contact the Twitter API anymore every time the user wants to view tweets. Here's how I would picture what's happening every time the user makes a request: If we do it through JavaScript, it would be harder on my part because I would have to create the signatures manually for every request, but I will gladly do it if the boost in performance is worth all the trouble. Doing it through Ruby on Rails would be very easy since the Twitter gem does most of the grunt work already, so I'm really encouraging the other people on the project to agree with me. Is the difference in performance trivial, or is it significant enough to switch to JavaScript?

    Read the article

  • Centered Content using panelGridLayout

    - by Duncan Mills
    A classic layout conundrum, which I think pretty much every ADF developer may have faced at some time or other, is that of truly centered (centred) layout. Typically this requirement comes up in relation to, say, displaying a login-type screen or similar. Superficially the problem seems easy, but as my buddy Eduardo explained when discussing this subject a couple of years ago, it's actually a little more complex than you might have thought. In fact, even the "solution" provided in that posting is not perfect and suffers from several issues (not Eduardo's fault, just limitations of panelStretch!): The top, bottom, end and start facets all need something in them. The percentages you apply to topHeight, startWidth, etc. are calculated as part of the whole width. This means that you have to guesstimate the correct percentage based on your typical screen size and the sizing of the centered content, so at best you will in fact only get approximate centering, and the more you tune that centering for a particular browser size, the more it will fail if the user resizes. You can't attach styles to the panelStretchLayout facets, so to provide things like background color or fixed sizing you need to embed another container that you can apply styles to, typically a panelGroupLayout. For reference, here's the code to print a simple 100px x 100px red centered square using the panelStretchLayout solution, approximately tuned to a 1980 x 1080 maximized browser (IDs omitted for brevity): <af:panelStretchLayout startWidth="45%" endWidth="45%" topHeight="45%" bottomHeight="45%" > <f:facet name="center"> <af:panelGroupLayout inlineStyle="height:100px;width:100px;background-color:red;" layout="vertical"/> </f:facet> <f:facet name="top"> <af:spacer height="1" width="1"/> </f:facet> <f:facet name="bottom"> <af:spacer height="1" width="1"/> </f:facet> <f:facet name="start"> <af:spacer height="1" width="1"/> </f:facet> <f:facet name="end"> <af:spacer height="1" width="1"/> </f:facet> </af:panelStretchLayout> And so to panelGridLayout. So here's the good news: panelGridLayout makes this really easy, and it works without the caveats above. The key point is that percentages used in the grid definition are evaluated after the fixed sizes are taken into account, so rather than having to guesstimate what percentage will, more or less, center the content, you can just say "allocate half of what's left" to the flexible content and you're done. Here's the same example using panelGridLayout: <af:panelGridLayout> <af:gridRow height="50%"/> <af:gridRow height="100px"> <af:gridCell width="50%" /> <af:gridCell width="100px" halign="stretch" valign="stretch" inlineStyle="background-color:red;"> <af:spacer width="1" height="1"/> </af:gridCell> <af:gridCell width="50%" /> </af:gridRow> <af:gridRow height="50%"/> </af:panelGridLayout> So you can see that the amount of markup is somewhat smaller (as is, I should mention, the generated DOM structure in the browser), mainly because we don't need to introduce artificial components to ensure that facets are actually observed in the final result. But the key thing here is that the centering is no longer approximate, and it will work as expected as the user resizes the browser screen. By far this is a more satisfactory solution, and although it's only a simple example, it will hopefully open your eyes to the potential of panelGridLayout as your number one, go-to layout container.
Just a reminder though, right now, panelGridLayout is only available in 11.1.2.2 and above.

    Read the article
