Search Results

Search found 2061 results on 83 pages for 'fixes'.

Page 64 of 83

  • At the Java DEMOgrounds - Oracle Java ME Embedded Enables the “Internet of Things”

    - by Janice J. Heiss
    I caught up with Oracle’s Robert Barnes, Senior Director, Java Product Management, who was demonstrating a new product from Oracle’s Java Platform, Micro Edition (Java ME) product portfolio: Oracle Java ME Embedded 3.2, a complete client Java runtime optimized for microcontrollers and other resource-constrained devices. Oracle Java ME Embedded 3.2 is a Java ME runtime based on CLDC 1.1 (JSR-139) and IMP-NG (JSR-228).

    “What we are showing here is the Java ME Embedded 3.2 that we announced last week,” explained Barnes. “It’s the start of the ‘Internet of Things,’ in which you have very, very small devices that sit on the edge of the network where the sensors are. You often have a middle tier called a gateway or a concentrator, which is a mid- to higher-performance device. On the back end you have a very high performance server. What this is showing is Java spanning all the way from the server side right down to the type of chip that you will find on the sensor side of the network.”

    Barnes explained that he had two different demos running. The first, called the Solar Panel System Demo, measures the brightness of a light source. “This,” said Barnes, “is a light source demo with a Cortex M3 controlling the motor, on the end of which is a sensor measuring the brightness of the lamp. It records the brightness data, and as we move the lamp out of the way, we can use the server to turn the sensor towards the lamp so the brightness reading goes higher. This sends the message back to the server, and we can look at the web server sitting on the PC underneath the desk. We can actually see the data being passed back, effectively through a back-office type of function within a utility environment.”

    The second demo, the Smart Grid Response Demo, Barnes explained, “has the same board and processor and is still using Java ME Embedded, with a different app on top. This is a demand response demo. What we are seeing within the managed environment is that people want to track the pricing signals of the electricity. If it’s particularly expensive at any point in time, they may turn something off. This demo sets the price of the electricity as though it is coming from the back-end server, sending pricing signals to my home.” The demo had a lamp and a fan, and it was tracking the price of electricity. “If I set the price of the electricity to go over 5 cents, then the device will turn off,” explained Barnes. “I can go into my settings and, in this case, change the price to 50 cents, and if we wait a minute the lamp will go off. When I change the pricing signal so that it is lower, the lamp will come back on. The key point is that the Java software we have running is the same across all the different devices; it’s a way to build applications across multiple devices using the same software. This is important because it addresses peak loading on the network and can stop blackouts.”

    This demo brought me back to a prior decade, when Sun Microsystems first promoted Jini technology, a version of Java that would put everything on the network and give us the smart home. Your home would be automated to tell you when you were out of milk, when to change your light bulbs, and so on. You would have access to the web and the network throughout your home. It’s interesting to see how technology moves over time – from the smart home to the Internet of Things.

    Read the article

  • Gaming on Cloud

    - by technomad
    Sometimes I wonder if the pundits of cloud computing are way too consumed with enterprise applications. With all the CAPEX/OPEX and ROI talk taking center stage, an opportunity to affect the masses directly is getting overlooked. I am a self-proclaimed die-hard gamer. I come from the generation of gamers who started their journey in DOS games like Wolfenstein 3D and Allan Border Cricket (the latter is still a favorite pastime). In the late 90s, a revolution called accelerated graphics started with DirectX and OpenGL. Games got more advanced. The likes of Quake III and Unreal Tournament became the crown jewels of the industry. But with all these advancements, there started a race: a race between GFX giants ATI and NVIDIA to beat each other on frame rates and image quality. Revisions to the graphics chipsets became frequent. Games became eye candy, but at the cost of more GPU power and memory.

    Every eagerly awaited title started demanding more muscle in graphics and PC hardware. The latest games and all the liquid-smooth frame rates became the territory of the ones with deep pockets who could spend lavishly on the latest hardware. Enthusiasts like yours truly, who couldn’t afford this route, started exploring overclocking, optimized hardware cooling, etc. to pursue the passion. The ever-rising cost of hardware requirements led to rampant piracy of PC games. Gamers were willing to spend on the latest titles, but the ones on a tight budget preferred hardware upgrades over a legal copy of the game. This was also fueled by the emergence of P2P file-sharing networks.

    Then came the era of the Xbox and PS3. It solved the major issue of hardware standardization and provided an alternative to ever-increasing hardware costs. I have always admired these consoles, but being born and brought up in a keyboard/mouse environment, I still find it difficult to play first-person shooters with a gamepad. I leave the topic of PC vs. console gaming for another day, but the bottom line is… PC gamers deserve an equally democratized solution.

    This is where I think cloud computing can come to the rescue. It can minimize hardware requirements, virtually end software piracy, and rationalize costs for gamers: subscription-based models like pay-as-you-play; in-game rewards, like extended subscription credits for exceptional gamers (oh yes, I have beaten Xaero on nightmare in Quake III, time and again!); easy deployment of patches and fixes; better game AI. The list goes on and on… Fortunately, companies like OnLive are thinking in the same direction. Their gaming service is all set to launch on 17th June 2010 at the E3 2010 expo in L.A. I wish them all the luck. I hope they will start a trend which will bring the smiles back to the faces of budget gamers with the help of cloud computing.

    Read the article

  • Running Powershell from within SharePoint

    - by Norgean
    Just because something is a daft idea, doesn't mean it can't be done. We sometimes need to do some housekeeping - like delete old files or list items or… yes, well, whatever you use Powershell for in a SharePoint world. Or it could be that your solution has "issues" for which you have Powershell solutions, but not the budget to transform into proper bug fixes. So you create a "how to" for the ITPro guys.

    Idea: what if we keep the scripts in a list, and have SharePoint execute the scripts on demand? An announcements list works well (because of the multiline body field). Warning! Let us be clear: this list needs to be locked down; if somebody creates a malicious script and you run it, I cannot help you.

    First, we need to figure out how to start Powershell scripts from C#. Hit teh interwebs and the Googlie, and you may find jpmik's post: http://www.codeproject.com/Articles/18229/How-to-run-PowerShell-scripts-from-C (or MS' official answer at http://msdn.microsoft.com/en-us/library/ee706563(v=vs.85).aspx).

        public string RunPowershell(string powershellText, SPWeb web, string param1, string param2)
        {
            // Powershell ~= RunspaceFactory - i.e. create a powershell context
            var runspace = RunspaceFactory.CreateRunspace();
            var resultString = new StringBuilder();
            try
            {
                // Load the SharePoint snapin - note: you cannot do this in the
                // script itself (i.e. add-pssnapin etc. does not work)
                PSSnapInException snapInError;
                runspace.RunspaceConfiguration.AddPSSnapIn("Microsoft.SharePoint.PowerShell", out snapInError);
                runspace.Open();

                // Set a web variable...
                runspace.SessionStateProxy.SetVariable("webContext", web);
                // ...and some user-defined parameters
                runspace.SessionStateProxy.SetVariable("param1", param1);
                runspace.SessionStateProxy.SetVariable("param2", param2);

                var pipeline = runspace.CreatePipeline();
                pipeline.Commands.AddScript(powershellText);
                // Add a "return" variable
                pipeline.Commands.Add("Out-String");

                // Execute!
                var results = pipeline.Invoke();

                // Convert the script result into a single string
                foreach (PSObject obj in results)
                {
                    resultString.AppendLine(obj.ToString());
                }
            }
            finally
            {
                // Close the runspace
                runspace.Close();
            }
            // Consider logging the result. Or something.
            return resultString.ToString();
        }

    Ok. We've written some code. Let us test it:

        var runner = new PowershellRunner();
        runner.RunPowershell(@"
            $web = Get-SPWeb 'http://server/web' # or $webContext
            $web.Title = $param1
            $web.Update()
            $web.Dispose()
        ", null, "New title", "not used");

    Next step: connect the code to the list, or more specifically, have the code execute on one (or several) list items. As there are more options than readers, I'll leave this as an exercise for the reader. Some alternatives:
    - Create a ribbon button that calls RunPowershell with the body of the selected items
    - Add a layout page
    - Specify the list item from the query string (possibly coupled with a content editor webpart with html that links directly to this page with the querystring)
    - A webpart
    - A listing with an "execute" column
    - A list with multiselect and an execute button
    - Etc!

    Now that you have the code for executing powershell scripts, you can easily expand this into a timer job which executes scripts at regular intervals (a sketch follows below). But if the previous solution was dangerous, this is even worse - the scripts will usually be run with one of the admin accounts, and can do pretty much anything...

    One more thing... Note that as this is running "consoleless", calls to Write-Host will fail. Two solutions: remove all output, or check whether the script is run in a console window or not.
        if ($host.Name -eq "ConsoleHost") { Write-Host 'If I agreed with you we''d both be wrong' }
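    For the timer-job variant mentioned above, a minimal sketch might look like the following. This is illustrative only: PowershellRunner wraps the RunPowershell method shown earlier, and the "Powershell Scripts" list name and site URL are assumptions, not something from the original walkthrough.

        // Hedged sketch of the timer-job idea; the list name and URL are placeholders.
        public class PowershellScriptJob : SPJobDefinition
        {
            public PowershellScriptJob() : base() { }

            public PowershellScriptJob(string name, SPWebApplication webApp)
                : base(name, webApp, null, SPJobLockType.Job) { }

            public override void Execute(Guid targetInstanceId)
            {
                using (SPSite site = new SPSite("http://server/web"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Powershell Scripts"];
                    var runner = new PowershellRunner();
                    foreach (SPListItem item in list.Items)
                    {
                        // The announcements list keeps each script in its Body field.
                        string script = item["Body"] as string ?? string.Empty;
                        runner.RunPowershell(script, web, null, null);
                    }
                }
            }
        }

    The same warning applies twice over here: lock the list down, because the job will typically run under a farm or admin account.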

    Read the article

  • Randomly losing wireless connection with Ubuntu 12.04

    - by statquant
    I am presently experiencing random disconnections from my wireless network. It looks like it is more and more frequent (however I have not seen any clear pattern). This is killing me... Here is some information that should help (from the Ubuntu forums). Thanks for reading.

    Machine: Acer Aspire S3

        statquant@euclide:~$ lsb_release -d
        Description: Ubuntu 12.04.1 LTS

        statquant@euclide:~$ uname -mr
        3.2.0-33-generic x86_64

        statquant@euclide:~$ sudo /etc/init.d/networking restart
        * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
        * Reconfiguring network interfaces...

        statquant@euclide:~$ lspci
        02:00.0 Network controller: Atheros Communications Inc. AR9485 Wireless Network Adapter (rev 01)

        statquant@euclide:~$ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 004: ID 064e:c321 Suyin Corp.
        Bus 002 Device 003: ID 0bda:0129 Realtek Semiconductor Corp.

        statquant@euclide:~$ ifconfig
        wlan0  Link encap:Ethernet HWaddr 74:de:2b:dd:c4:78
               inet addr:192.168.1.3 Bcast:192.168.1.255 Mask:255.255.255.0
               inet6 addr: fe80::76de:2bff:fedd:c478/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
               RX packets:913 errors:0 dropped:0 overruns:0 frame:0
               TX packets:802 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:873218 (873.2 KB) TX bytes:125826 (125.8 KB)

        statquant@euclide:~$ iwconfig
        wlan0  IEEE 802.11bgn ESSID:"Bbox-D646D1"
               Mode:Managed Frequency:2.437 GHz Access Point: 00:19:70:80:01:6C
               Bit Rate=65 Mb/s Tx-Power=16 dBm
               Retry long limit:7 RTS thr:off Fragment thr:off
               Power Management:on
               Link Quality=56/70 Signal level=-54 dBm
               Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
               Tx excessive retries:0 Invalid misc:71 Missed beacon:0

        statquant@euclide:~$ dmesg | grep "wlan"
        [ 17.495866] ADDRCONF(NETDEV_UP): wlan0: link is not ready
        [ 17.498950] ADDRCONF(NETDEV_UP): wlan0: link is not ready
        [ 20.072015] wlan0: authenticate with 00:19:70:80:01:6c (try 1)
        [ 20.269853] wlan0: authenticate with 00:19:70:80:01:6c (try 2)
        [ 20.272386] wlan0: authenticated
        [ 20.298682] wlan0: associate with 00:19:70:80:01:6c (try 1)
        [ 20.302321] wlan0: RX AssocResp from 00:19:70:80:01:6c (capab=0x431 status=0 aid=1)
        [ 20.302325] wlan0: associated
        [ 20.307307] ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
        [ 30.402292] wlan0: no IPv6 routers present

        statquant@euclide:~$ sudo lshw -C network
        [sudo] password for statquant:
        *-network
            description: Wireless interface
            product: AR9485 Wireless Network Adapter
            vendor: Atheros Communications Inc.
            physical id: 0
            bus info: pci@0000:02:00.0
            logical name: wlan0
            version: 01
            serial: 74:de:2b:dd:c4:78
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list rom ethernet physical wireless
            configuration: broadcast=yes driver=ath9k driverversion=3.2.0-33-generic firmware=N/A ip=192.168.1.3 latency=0 link=yes multicast=yes wireless=IEEE 802.11bgn
            resources: irq:17 memory:c0400000-c047ffff memory:afb00000-afb0ffff

        statquant@euclide:~$ iwlist scan
        wlan0  Scan completed:
               Cell 01 - Address: 00:19:70:80:01:6C
                   Channel:6
                   Frequency:2.437 GHz (Channel 6)
                   Quality=56/70 Signal level=-54 dBm
                   Encryption key:on
                   ESSID:"Bbox-D646D1"
                   Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s
                   Bit Rates:24 Mb/s; 36 Mb/s; 48 Mb/s; 54 Mb/s
                   Mode:Master
                   Extra:tsf=000000125fb152bb
                   Extra: Last beacon: 40020ms ago
                   IE: Unknown: 000B42626F782D443634364431
                   IE: Unknown: 010882848B960C121824
                   IE: Unknown: 030106
                   IE: IEEE 802.11i/WPA2 Version 1
                       Group Cipher : TKIP
                       Pairwise Ciphers (2) : CCMP TKIP
                       Authentication Suites (1) : PSK
                   IE: WPA Version 1
                       Group Cipher : TKIP
                       Pairwise Ciphers (2) : CCMP TKIP
                       Authentication Suites (1) : PSK
                   IE: Unknown: 2A0100
                   IE: Unknown: 32043048606C
                   IE: Unknown: DD180050F2020101820003A4000027A4000042435E0062322F00
                   IE: Unknown: 2D1A4C101BFF00000000000000000000000000000000000000000000
                   IE: Unknown: 3D1606080800000000000000000000000000000000000000
                   IE: Unknown: DD0900037F01010000FF7F
                   IE: Unknown: DD0A00037F04010000000000

    And finally, please note that I did the following (after looking for fixes to similar problems), but unfortunately it did not work:

        sudo modprobe -r iwlwifi
        sudo modprobe iwlwifi 11n_disable=1

    Read the article

  • Moving StarterSTS to the (Azure) Cloud

    - by Your DisplayName here!
    Quite a few people have asked me about an Azure version of StarterSTS. While I kinda knew what I had to do to make the move, I couldn’t find the time. Until recently. This blog post briefly documents the necessary changes and design decisions for the next version of StarterSTS, which will work both on-premise and on Azure.

    Provider
    Fortunately StarterSTS is already based on the idea of “providers”. Authentication, roles and claims generation are based on the standard ASP.NET provider infrastructure. This makes the migration to different data stores less painful. In my case I simply moved the ASP.NET provider database to SQL Azure and still use the standard SQL Server based membership, roles and profile providers. In addition, StarterSTS has its own providers to abstract resource access for certificates, relying party registration, client certificate registration and delegation. So I only had to provide new implementations. Signing and SSL keys now go in the Azure certificate store, and user mappings (client certificates and delegation settings) have been moved to Azure table storage. The one thing I didn’t anticipate when I originally wrote StarterSTS was the need to also encapsulate configuration. Currently configuration is “locked” to the standard .NET configuration system. The new version will have a pluggable SettingsProvider, with versions for .NET configuration as well as Azure service configuration. If you want to externalize these settings into e.g. a database, it is now just a matter of supplying a corresponding provider. Moving between the on-premise and Azure versions will be just a matter of using different providers.

    URL Handling
    Another thing that’s substantially different on Azure (and in load-balanced scenarios in general) is the handling of URLs. In farm scenarios, the standard APIs like ASP.NET’s Request.Url return the current (internal) machine name, but you typically need the address of the external-facing load balancer. There’s a hotfix for WCF 3.5 (included in v4) that fixes this for WCF metadata. This was accomplished by using the HTTP Host header to generate URLs instead of the local machine name. I now use the same approach for generating WS-Federation metadata as well as information card files (a sketch of the idea follows at the end of this post).

    New Features
    I introduced a cache provider. Since we now have slightly more expensive lookups (e.g. relying party data from table storage), it makes sense to cache certain data in the front end. The default implementation uses the ASP.NET web cache and can be easily extended to use products like memcached or AppFabric Caching. Starting with the relying party provider, I now also provide a read/write interface. This allows building management interfaces on top of this provider. I also include a (very) simple web page that allows working with the relying party provider data. I guess I will use the same approach for other providers in the future as well. I am also doing some work on the tracing and health monitoring area, which is especially important for the Azure version. Stay tuned.
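    To make the Host-header approach concrete, here is a minimal sketch of the idea. It is illustrative only, not StarterSTS's actual code; the class and method names are invented for the example.

        using System;
        using System.Web;

        // Illustrative sketch only -- not StarterSTS's implementation.
        public static class ExternalUrl
        {
            // Behind a load balancer, Request.Url reflects the internal node;
            // the HTTP Host header carries the externally visible name.
            public static Uri GetBaseUri(HttpRequest request)
            {
                string host = request.Headers["Host"];        // e.g. "sts.example.com"
                string scheme = request.IsSecureConnection
                    ? Uri.UriSchemeHttps
                    : Uri.UriSchemeHttp;
                return new Uri(scheme + "://" + host + request.ApplicationPath);
            }
        }

    Any URL built from this base (metadata endpoints, information card files) then points at the load balancer rather than at whichever farm node happened to serve the request.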

    Read the article

  • Violation of the DRY Principle

    - by Onorio Catenacci
    I am sure there's a name for this anti-pattern somewhere; however, I am not familiar enough with the anti-pattern literature to know it.

    Consider the following scenario: or0 is a member function in a class. For better or worse, it's heavily dependent on class member variables. Programmer A comes along and needs functionality like or0, but rather than calling or0, Programmer A copies and renames the entire class. I'm guessing that she doesn't call or0 because, as I say, it's heavily dependent on member variables for its functionality. Or maybe she's a junior programmer and doesn't know how to call it from other code. So now we've got or0 and c0 (c for copy). I can't completely fault Programmer A for this approach - we all get under tight deadlines and we hack code to get work done.

    Several programmers maintain or0, so it's now version orN. c0 is now version cN. Unfortunately most of the programmers who maintained the class containing or0 seemed to be completely unaware of c0 - which is one of the strongest arguments I can think of for the wisdom of the DRY principle. And there may also have been independent maintenance of the code in cN. Either way, it appears that or0 and c0 were maintained independently of each other. And, joy and happiness, an error is occurring in cN that does not occur in orN.

    So I have a few questions:

    1.) Is there a name for this anti-pattern? I've seen this happen so often I'd find it hard to believe it is not a named anti-pattern.

    2.) I can see a few alternatives:
    a.) Fix orN to take a parameter that specifies the values of all the member variables it needs, then modify cN to call orN with all of the needed parameters passed in (see the sketch below).
    b.) Try to manually port fixes from orN to cN. (Mind you, I don't want to do this, but it is a realistic possibility.)
    c.) Recopy orN to cN - again, yuck, but I list it for the sake of completeness.
    d.) Try to figure out where cN is broken and then repair it independently of orN.

    Alternative a seems like the best fix in the long term, but I doubt the customer will let me implement it. Never time or money to fix things right, but always time and money to repair the same problem 40 or 50 times, right? Can anyone suggest other approaches I may not have considered? If you were in my place, which approach would you take?

    If there are other questions and answers here along these lines, please post links to them. I don't mind removing this question if it's a dupe, but my searching hasn't turned up anything that addresses this question yet.

    EDIT: Thanks everyone for all the thoughtful responses. I asked about a name for the anti-pattern so I could research it further on my own. I'm surprised this particular bad coding practice doesn't seem to have a "canonical" name.
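    For what alternative (a) might look like mechanically, here is a hedged sketch - every name in it is invented for the example. The member-variable-dependent logic is hoisted into a single parameterized method, and both classes become thin wrappers over it, so there is only one implementation left to fix.

        // Hypothetical sketch of alternative (a); all names are invented.
        public static class SharedCore
        {
            // The single authoritative implementation formerly duplicated
            // between orN and cN. Its inputs are explicit parameters
            // instead of member variables.
            public static int Or(int threshold, string mode)
            {
                return mode == "strict" ? threshold * 2 : threshold;
            }
        }

        public class OriginalClass
        {
            private int _threshold = 10;
            private string _mode = "strict";

            // orN now just forwards its own state explicitly.
            public int OrN() { return SharedCore.Or(_threshold, _mode); }
        }

        public class CopiedClass
        {
            private int _limit = 20;

            // cN calls the same core, so fixes land in one place.
            public int CN() { return SharedCore.Or(_limit, "lenient"); }
        }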

    Read the article

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features with varying support across browsers. Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support. The best polyfills will detect whether the current browser has native support, and only add the functionality if necessary. The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes. If JSON is not available, the library adds a JSON property in the global object.

    This approach provides some big benefits:
    - It lets you add great new HTML5 features to your web sites sooner.
    - It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs.
    - Where most one-off legacy code fixes tend to break down over time, well done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead.

    You should also remember that polyfills represent an entirely separate code path (and sometimes a plug-in) that requires testing for support. Also, polyfills tend to run on older browsers, which often have slower JavaScript performance. As a result you might find that performance on older browsers is not comparable. When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site.

    For an example of a polyfill, consider a page that draws a few geometric shapes on a <canvas>:

        <script src="jquery-1.7.1.min.js"></script>
        <script>
            $(document).ready(function () {
                drawCanvas();
            });

            function drawCanvas() {
                var context = $("canvas")[0].getContext('2d');
                // background
                context.fillStyle = "#8B0000";
                context.fillRect(5, 5, 300, 100);
                // empty box
                context.strokeStyle = "#B0C4DE";
                context.lineWidth = 4;
                context.strokeRect(20, 15, 80, 80);
                // circle
                context.arc(160, 55, 40, 0, Math.PI * 2, false);
                context.fillStyle = "#4B0082";
                context.fill();
            }
        </script>

    The result is a simple static canvas with a box and a circle. To enable this functionality in a pre-canvas browser we can find a polyfill. A check on html5please.com references FlashCanvas. Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site. Then, based on the documentation, you need to add a single line to your original HTML file:

        <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->

    ...and you have canvas functionality! The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load and processing time. Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support. For instance, the FlashCanvas home page advertises 70% support of HTML5 Canvas spec tests.

    Read the article

  • MSBuild (.NET 4.0) access problems

    - by JMP
    I'm using CruiseControl.NET as my build server (Windows 2008 Server). Yesterday I upgraded my ASP.NET MVC project from VS 2008/.NET 3.5 to VS 2010/.NET 4.0. The only change I made to my ccnet.config's MSBuild task was the location of MSBuild.exe. Ever since I made that change, the build has been broken with the error:

        MSB4019 - The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the declaration is correct, and that the file exists on disk.

    This file does, in fact, exist in the location specified (I solved a problem similar to this when setting up the build server for VS2008/.NET 3.5 by copying the files from my dev environment to my build environment). So I RDP'ed into the build machine, opened a command prompt, and used MSBUILD to attempt to build my project. MSBUILD returns the error:

        MSB3021 - Unable to copy file "obj\debug....dll". Access to the path 'bin....dll' is denied.

    Since I'm running MSBUILD from the command prompt, logged in with an account that has administrative privileges, I'm assuming that MSBUILD is running with the same privileges that I have. Next, I tried to copy the file that MSBUILD was attempting to copy. In this case, I get the UAC dialog that makes me click the [Continue] button to complete the copy. I'd like to avoid installing Visual Studio 2010 on my build machine. Can anyone suggest other fixes I might try?

    Read the article

  • 550 5.7.1 our Bulk Email Senders Guidelines

    - by darkandcold
    Hello, I moved to a new server with integrated Merak mail. Since I had forgotten Merak (I used it before), I failed to configure the security options. So before I realized the issue, my SMTP server had been used by spammers for two days, hundreds of times. I configured the security settings as they should be (at least I hope). Then I realized I can't send e-mail to Gmail or Hotmail. Gmail says "550 5.7.1 see our Bulk Email Senders Guidelines".

    Yes, I saw its guidelines. First, I created SPF records via the MSN Sender ID wizard; that completed very well. Then, as Gmail says, I created DKIM DNS records. (In Merak mail, there is an option to create DKIM; I created it and saved it to DNS as a TXT record.)

    It's been two days since doing the fixes above, but I still can't send mail to Gmail or Hotmail. (P.S. Some of my domains in Merak are forwarded to my Gmail account. I mean, when a mail arrives at [email protected], it will forward to my Gmail. And the surprise is, Merak mail can forward them very well :S but it still can't send mail from Outlook etc.)

    Read the article

  • IE Display Bug, jQuery bug

    - by nute
    So I built some complex ajaxy jQuery module on my homepage, with the help of "scrollable" from flowplayer.org. It works fine for me on Chrome, Opera, Firefox... but of course IE is not playing friendly (regardless of the version, from my testing). Objects are not displaying exactly where they should, some are overlaying each other, and when I click a button some divs just disappear. However, if I resize the IE browser window up and down, the display mostly fixes itself. Then if I click on one of the buttons I made, it messes it up again. Until I resize the window again and it looks fine.

    To see the problem:
    1. Go to makemeheal.com
    2. Visit a couple of product pages (you need a product browsing history to see the module)
    3. Go to: http://www.makemeheal.com/mmh/home.do?forceshowIE=1
    4. Look at the "Your Recent History" module. (Note the forceshowIE=1, because by default I hide it for IE people.)

    I was thinking maybe there is a way to force IE to redraw the entire module sometimes? Or maybe someone has a better idea on how to fix the underlying problem? Source code is available here:
    http://www.makemeheal.com/mmh/scripts/recentHistory.js
    http://www.makemeheal.com/mmh/styles/recentHistory.css
    Thanks

    Read the article

  • CGContextDrawPDFPage taking up large amounts of memory

    - by Ed Marty
    I have a PDF file that I want to draw in outline form. I want to draw the first several pages of the document, each in their own UIImage, to use on a button so that when clicked, the main display will navigate to the clicked page. However, CGContextDrawPDFPage seems to be using copious amounts of memory when attempting to draw the page. Even though the image is only supposed to be around 100px tall, the application crashes while drawing one page in particular, which according to Instruments allocates about 13 MB of memory just for the one page. Here's the code for drawing:

        // Note: This is always called in a background thread, but the autorelease pool is set up elsewhere
        + (void)drawPage:(CGPDFPageRef)m_page inRect:(CGRect)rect inContext:(CGContextRef)g
        {
            CGPDFBox box = kCGPDFMediaBox;
            CGAffineTransform t = CGPDFPageGetDrawingTransform(m_page, box, rect, 0, YES);
            CGRect pageRect = CGPDFPageGetBoxRect(m_page, box);

            // Start the drawing
            CGContextSaveGState(g);
            // Clip to our bounding box
            CGContextClipToRect(g, pageRect);
            // Now we have to flip the origin to top-left instead of bottom-left
            // First: flip the y-axis
            CGContextScaleCTM(g, 1, -1);
            // Second: move the origin
            CGContextTranslateCTM(g, 0, -rect.size.height);
            // Now apply the transform to draw the page within the rect
            CGContextConcatCTM(g, t);
            // Finally, draw the page
            // The important bit. Commenting out the following line "fixes" the crashing issue.
            CGContextDrawPDFPage(g, m_page);
            CGContextRestoreGState(g);
        }

    Is there a better way to draw this image that doesn't take up huge amounts of memory?

    Read the article

  • 'click' observer in Prototype

    - by Tom
    I have a page which contains several divs (each with a unique ID and the same class, 'parent'). Underneath each "parent" div is a div with class "child", whose ID is the parent's ID plus "-children". This div is empty on page load. Whenever you click on a parent div, the following code is executed:

        $$('div.parent').each(function(s){
            $(s).observe('click', function(event){
                event.stop();
                var filer = $(s).readAttribute('filer');
                var currentElement = $(s).id;
                var childElement = currentElement + '-children';
                new Ajax.Updater({success: childElement}, root + '/filers/interfacechildren', {
                    parameters: {parentId: currentElement, filer: filer}
                });
            });
        });

    Of course, it's possible that a child node is in turn a parent of its own. The response looks like this (Smarty with Zend Framework):

        {foreach from=$ifaces item=interface}
            <div id="{$interface->name}" filer="{$interface->system_id}" class="parent">{$interface->name}</div>
            <div id="{$interface->name}-children" class="child"></div>
        {/foreach}

    Whenever I click on a "parent" div that is loaded inside a child, nothing happens :( Any suggestions or fixes for this?

    Read the article

  • Setting up an efficient and effective development process

    - by christopher-mccann
    I am in the midst of setting up the development environment (PHP/MySQL) for my start-up. We use three sets of servers:

    LIVE - the servers which provide the actual application
    TEST - providing a testing version before it is actually released
    DEV - the development servers

    The development servers run SVN with each developer checking out their local copy. At the end of each day completed fixes are checked in, and then we use Hudson to automate our build process and transfer it over to TEST. We then check that the application still functions correctly using a tester, and if everything is fine move it to LIVE.

    I am happy with this process, but I do have two questions:
    1. How would you recommend we do local testing? As each developer adds new pages or changes functionality, I want them to be able to test what they are doing. Would you just set up local Apache and a local database and have them test locally on their own machines?
    2. How would you recommend dealing with data layer changes?

    Is there anything else you would recommend doing to really make our development process as easy and efficient as possible? Thanks in advance

    Read the article

  • Reverse engineering and patching a DirectX game?

    - by yodaj007
    Background
    I am playing Imperishable Night, one of the Touhou series of games. The shoot button is 'z', moving slower is 'shift', and the arrow keys move. Unfortunately for me, using shift-z ghosts my right arrow key, so I can't move to the right while shooting. This ghosting happens in all applications, and switching keyboards fixes it.

    Goal
    I want to locate, in the disassembled code, the DirectX function that gets the keyboard input and compares it against the 'z' key, and change that key to 'a'. I'm considering this a fun project. Assuming the sizes of the scan codes are the same, this should be fairly simple. And because the executable is only 400k, maybe this will provide a unique opportunity for me to explore the dark side of the computing underworld (kidding).

    Relevant experience
    I have some experience with coding in assembly, but not with disassembling it. I have no experience with the DirectX APIs.

    Question
    I need some guidance. I've found a listing of DirectX keyboard scan codes, and a program called PEExplorer that looks like it will do what I need. Is there a means by which I can turn some of the assembly into C function calls so it's more easily read? I will need to locate where the game retrieves the currently pressed keys and compares those against a list, and it's that list I need to modify. Any input would be greatly appreciated.
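    Once the comparison constant has been located with a disassembler, the mechanical part of the patch is small. A hedged sketch follows - the file name and offset are placeholders that must first be found with PEExplorer or a disassembler, not real values; DIK_Z (0x2C) and DIK_A (0x1E) are the standard DirectInput keyboard scan codes.

        // Hypothetical patcher sketch. The offset and file name below are
        // placeholders. Always work on a copy of the executable.
        using System.IO;

        class ScanCodePatcher
        {
            const byte DIK_Z = 0x2C;   // DirectInput scan code for 'z'
            const byte DIK_A = 0x1E;   // DirectInput scan code for 'a'

            static void Main()
            {
                int offset = 0x12345;                       // placeholder offset
                byte[] exe = File.ReadAllBytes("game.exe"); // placeholder name

                if (exe[offset] == DIK_Z)
                {
                    exe[offset] = DIK_A;
                    File.WriteAllBytes("game-patched.exe", exe);
                }
            }
        }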

    Read the article

  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built and the code committed to SVN. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the Live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again.

    This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to Live in error because they are mixed up in files associated with a signed-off release.

    I would like to move this to a more controlled and automated process where all source code and output files are held in SVN, and releases to Dev, UAT and Live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a Release branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?

    Read the article

  • CodeIgniter/PHP - Calling a view from within a view

    - by Jack W-H
    Hi folks. Basically, for my webapp I'm trying to organise things a bit better. As it is at the moment, every time I want to load a page, I have to do it from my controller like so:

        $this->load->view('subviews/template/headerview');
        $this->load->view('subviews/template/menuview');
        $this->load->view('The-View-I-Want-To-Load');
        $this->load->view('subviews/template/sidebar');
        $this->load->view('subviews/template/footerview');

    As you can tell, it's not really very efficient. So I thought I'd create one 'master' view - it's called template.php. This is the contents of the template view:

        <?php
        $view = $data['view'];
        $this->load->view('subviews/template/headerview');
        $this->load->view('subviews/template/menuview');
        $this->load->view($view);
        $this->load->view('subviews/template/sidebar');
        $this->load->view('subviews/template/footerview');
        ?>

    And then I thought I'd be able to call it from a controller like this:

        $data['view'] = 'homecontent';
        $this->load->view('template', $data);

    Unfortunately I simply cannot make this work. Does anyone have any ways around this or fixes I can put into place? I've tried putting ""s and ''s around $view in template.php, but that makes no difference. The usual error is "Undefined variable: data" or "Cannot load view: $view.php" etc.

    Thanks folks!
    Jack

    Read the article

  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation:

    Delphi 2007
    - JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components)
    - JWA / JWSCL - JEDI API Library & Security Code Library
    - GExperts - a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE
    - TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi
    - Virtual TreeView - a treeview control built from the ground up; more than 5 years of development made it one of the most flexible and advanced tree controls available today
    - MustangPeak Components (EasyListview, Virtual ShellTools, etc.) - EasyListview is a control that has no dependence on the Microsoft Listview control but has all the features of the latest version from Microsoft; also includes 'Explorer.exe'-like shell components
    - Synapse lightweight networking components - contains simple low-level non-visual objects for easy programming without problems (no required multi-threaded synchronization, no need for Windows message processing,…); great for command-line utilities, visual projects, NT services
    - EurekaLog - a complete bug resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user's PC, generating a detailed log of the call stack (with file, class, method and line number), optionally sending you a copy of each log entry via email or to a web bug-tracker
    - DelphiSpeedUp - an IDE plugin for Delphi and C++Builder that improves the IDE's startup speed and increases the general speed of the whole IDE
    - DDevExtensions - extends the Delphi/C++Builder IDE by adding some new productivity features
    - IDE Fix Pack - installs a DLL expert that fixes a number of RAD Studio 2007 bugs at runtime; all changes are done in memory, and no file on disk is modified
    - TPerlRegex - regular expression library for Delphi

    How about other Delphi developers?

    Read the article

  • What tools/techniques can benefit a solo developer?

    - by Michael Runyon
    Hello, I am a solo developer working in a very small web development firm. There is occasional support for development from contractors, but for the most part, if code is written in the office, I am writing it. Many of the articles and such on here talk extensively about the tools and techniques used for collaboration among developers in teams, but that is a non-issue for me, as there is no one to (technically) collaborate with. Yet I feel that in many ways, there are things that could be adjusted for efficiency.

    Our small office doesn't participate in many of the meetings and team structures that most of the techniques rely heavily upon... we mostly just walk around and talk to one another when something is needed. This works great for all of the just-in-time, 15-minute-fix stuff that probably populates 50% of my day, but I am also constantly working on major projects that require my total concentration, and between a flurry of tiny fixes and being the primary admin on the 4-6 servers that we own, I find it almost impossible to get real, heavy lifting done.

    What suggestions can some of you offer to help me/us become more productive and efficient, without adopting all of the corporate/teamwork practices that we are trying desperately to avoid? At what cost to efficiency comes our total relaxation?

    Read the article

  • Handling Digital ID/Signature in Outlook Add-in

    - by CoSteve
    I have a C# Outlook add-in application (VS2005 and Outlook 2003) that reads incoming emails and strips out the attachments and the email body text for future processing. Occasionally I'll get an email that contains a digital signature. The application will fail when I try to access the mailItem.Body property, throwing the following exception:

        System.Runtime.InteropServices.COMException (0xAB404001): The operation failed.
        at Microsoft.Office.Interop.Outlook._MailItem.get_Body()
        at MyLib.MyApp.OutlookAddin.MailProcessor.ProcessMailItem(MailItem mailItem)

    I'm pretty sure it is the digital signature causing the problem, because if I forward the email back to myself, it will strip off the original sender's digital signature, and the add-in application will process the email without any problems. I'm not sure what to do. I need to process the email, so I can't just ignore it. Somehow getting the body of the original email without throwing an exception would be ideal. Or I guess if I can identify that there is a digital signature associated with the email, I could forward the email to myself, but that seems a little messy. Does anyone have any suggestions/fixes? Thanks for any help.
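    One hedged possibility for the "identify the signature" route (a sketch, not a verified fix): S/MIME items can usually be recognized by their MessageClass, so the add-in could branch before touching Body. The HandleSignedMail helper below is hypothetical.

        // Sketch only. Assumes: using Outlook = Microsoft.Office.Interop.Outlook;
        private void ProcessMailItem(Outlook.MailItem mailItem)
        {
            string cls = mailItem.MessageClass ?? string.Empty;

            // Signed/encrypted mail typically reports an "IPM.Note.SMIME..." class.
            if (cls.StartsWith("IPM.Note.SMIME", StringComparison.OrdinalIgnoreCase))
            {
                // Hypothetical helper: forward-to-self, queue for later, or skip.
                HandleSignedMail(mailItem);
                return;
            }

            string body = mailItem.Body;   // safe path for ordinary items
            // ...strip attachments and body text as before...
        }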

    Read the article

  • jQuery 1.4 Dynamically created images aspect ratio incorrect in IE8 and max-width

    - by Chris
    After upgrading to jQuery 1.4, when I try to add an image to a page dynamically using jQuery 1.4 in IE8, the image loses the correct aspect ratio. This appears to affect only IE8 (IE7 and Firefox work fine) with jQuery 1.4 (1.3.2 works fine). Including the jQuery compatibility file does not fix the problem.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" language="javascript" type="text/javascript"></script>
            <!-- Switching to 1.3.2 fixes the problem -->
            <!--<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" language="javascript" type="text/javascript"></script>-->
            <script type="text/javascript">
                $(document).ready(function() {
                    var dynImg = $('<img></img>').attr('src', 'http://www.google.com/intl/en_ALL/images/logo.gif');
                    $('body').append(dynImg);
                });
            </script>
            <style type="text/css">
                img { max-width: 5em; }
            </style>
        </head>
        <body></body>
        </html>

    Read the article

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy?

    The reason I ask is that there seems to be a lot of waste (and risk) in using an agile approach of deploying changes monthly if 90% of the applications don't change. What I mean by this is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, then those 10 applications are regression-tested EVERY MONTH, even though they didn't have any feature changes or hotfixes. They have to be tested simply because the business is rolling updates every month.

    And consider the risk that is involved: if a mission-critical application is deployed that takes a few weeks, and multiple departments, to test, is it realistic to expect to have to constantly regression-test this application?

    One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. And the risk involved is great; a constantly changing framework (and deploying this framework) means the mission-critical app can never just enjoy the same code base for a long time. These applications share the same database, hence the need for the constant testing.

    I'm aware of TDD and automated tests, but those don't exist at the moment. Any advice?

    Read the article

  • Character encoding issues when generating MD5 hash cross-platform

    - by rogueprocess
    This is a general question about character encoding when using MD5 libraries in various languages. My concern is: suppose I generate an MD5 hash using a native Python string object, like this:

        message = "hello world"
        m = md5()
        m.update(message)

    Then I take a hex version of that MD5 hash using:

        m.hexdigest()

    and send the message & MD5 hash via a network - let's say a JMS message or an HTTP request. Now I get this message in a Java program in the form of a native Java string, along with the checksum. Then I generate an MD5 hash using Java, like this (using the Commons Codec library):

        String md5 = org.apache.commons.codec.digest.DigestUtils.md5Hex(s);

    My feeling is that this is wrong, because I have not specified the character encoding at either end. So the original hash will be based on the bytes of the Python version of the string; the Java one will be based on the bytes of the Java version of the string, and these two byte sequences will often not be the same - is that right? So really I need to specify "UTF-8" or whatever at both ends, right? (I am actually getting an intermittent error in my code where the MD5 checksum fails, and I suspect this is the reason - but because it's intermittent, it's difficult to say if changing this fixes it or not.) Thank you!
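    The fix the question points at is exactly that: hash bytes produced by an explicit encoding, never a platform-default string-to-byte conversion. A hedged sketch of the pattern follows, written in C# purely as an illustration; in Java the equivalent is feeding the DigestUtils.md5Hex(byte[]) overload with s.getBytes("UTF-8"), and on the Python side, m.update(message.encode('utf-8')).

        // Illustration of the principle, not code from the question:
        // pin the string-to-byte conversion to one explicit encoding.
        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class Md5Util
        {
            public static string Md5HexUtf8(string message)
            {
                using (MD5 md5 = MD5.Create())
                {
                    byte[] bytes = Encoding.UTF8.GetBytes(message);  // explicit UTF-8
                    byte[] hash = md5.ComputeHash(bytes);
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }
        }

    With both producers pinned to the same encoding, the byte sequences - and therefore the digests - agree regardless of language or platform.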

    Read the article

  • ifconfig networking telnet

    - by jhon
    Hi guys, I'm a newbie around networking. I have a question: what I want is to telnet to a specific IP/server, let us say 192.168.128.1. So I try:

        $ telnet 192.168.128.1
        Trying...

    and that's all - I never get connected. One of my friends made some script that "fixes" it; after running it I was able to connect to the server using:

        $ telnet 192.168.128.1
        $ user:

    Unfortunately I lost that script, so I'm here requesting your help. Reading my old notes, I remember that the script performed some modification to the entries listed by ifconfig -a. I also have the ifconfig output (copy & paste):

        $ ifconfig -a
        adapter0: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
            inet 192.168.128.150 netmask 0xffffff00 broadcast 192.168.128.255
            tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        adapter1: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
            inet 192.168.251.150 netmask 0xffffff00 broadcast 192.168.251.255
            tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        adapter2: flags=5e080863,c0<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),PSEG,LARGESEND,CHAIN>
            inet 192.168.250.150 netmask 0xffffff00 broadcast 192.168.250.255
            tcp_sendspace 131072 tcp_recvspace 65536 rfc1323 0
        lo0: flags=e08084b<UP,BROADCAST,LOOPBACK,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT>
            inet 127.0.0.1 netmask 0xff000000 broadcast 127.255.255.255
            inet6 ::1/0
            tcp_sendspace 131072 tcp_recvspace 131072 rfc1323 1

    More than commands, I'm looking for some explanation of why adding/changing those entries enables me to connect to the server. I do not see the server IP (i.e. 192.168.128.1) listed above. Thanks

    Read the article

  • Looking for ANTLR grammar syntax highlighting in VS2010

    - by David Mårtensson
    I am looking for some way to edit ANTLR grammar files directly within VS2010 with syntax highlighting. I have used ANTLRWorks a lot, but it has the drawback that I have to start ANTLRWorks separately, browse to the file I want to edit, make the changes and save. For minor fixes I do not need all the tools in ANTLRWorks, but I would still like the syntax highlighting. However, I have not been able to get VS2010 to open ANTLRWorks with the right file, and I have found no other way to get syntax highlighting directly within the VS2010 editor - the file just opens as plain text. I can get Visual Studio to open ANTLRWorks, but it will open with only the last set of files it had open, not the one I clicked on.

    So my questions are:
    1. Is there a way to get ANTLRWorks to open with the right file when I double-click on it in the Visual Studio project explorer?
    2. Is there any other way to get correct syntax highlighting for ANTLR grammar files within Visual Studio (or with another editor - preferably not one that costs money, but if there are no free ones a commercial one might be an option)?

    Read the article

  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, kind of like they discuss in the eval documentation. I've run into a known bug with GNU Make 3.80: when $(eval) evaluates a line that is over 193 characters, Make crashes with a "virtual memory exhausted" error. The code I have that causes the issue looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)
        $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES) # This is the problem line
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

        gmake: *** virtual memory exhausted. Stop.

    The expected output is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) are together over 193 characters long (if there are enough source files). I have tried running the Makefile on a directory where there are only two .c files, and it works fine. It's only when there are many .c files in the source directory that I get the error.

    I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or ability to install the newer version on the system I'm working on. I'm stuck with 3.80. So, is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?

    Read the article
