Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

Page 594/924 | < Previous Page | 590 591 592 593 594 595 596 597 598 599 600 601  | Next Page >

  • Choosing a low cost wildcard SSL cert (PositiveSSL, RapidSSL, or other)?

    - by Malcolm
    I'm looking to put in place a wildcard SSL certificate for a server that will be providing REST-style web services to multiple subdomains. We use NameCheap.com for our DNS services and they offer a choice of 2 very competitively priced wildcard certs: PositiveSSL Wildcard ($129.99/yr) and RapidSSL Wildcard ($148.88/yr). Is there any reason to choose one of these branded certs over the other? Or are there problems with these low-cost certs that we should be aware of? If so, what SSL vendor/products do you recommend, and why do you recommend them? Thank you, Malcolm

    Read the article

  • How do I make an exe into a service on Windows?

    - by user3677994
    I recently made an application that has multiple parts. One of the parts is a networking tool - it always starts with the OS, and it never displays any sort of message. It does, however, start an incredibly irritating console window, which is impossible to get rid of without closing the program itself (please just accept this one as a given). I have decided to work around this problem by starting the program (it's a *.exe) as a service, thus stopping it from showing up at all. As the application will be distributed to various computers (hence the need for a networking tool in the first place), I need a way to make this program install as a service - so I don't really want answers that tell me to go through a series of menus in the Control Panel, or to download a 3rd-party application that has to stay on whichever computer the service will run on. How can I do this?
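    A hedged sketch of the built-in route, with placeholder names and paths: the sc tool that ships with Windows can register an exe as an auto-start service. One caveat worth stating plainly: a plain console exe does not implement the Windows service control protocol, so on its own it may be terminated by the Service Control Manager shortly after starting; wrapper tools such as srvany or NSSM exist precisely to bridge that gap, though they do remain installed alongside the service.

        rem Sketch only: "MyNetTool" and the path are placeholders.
        rem Note the mandatory space after each "option=" in sc syntax.
        sc create MyNetTool binPath= "C:\Program Files\MyApp\nettool.exe" start= auto
        sc start MyNetTool
        rem To undo the registration later:
        rem sc delete MyNetTool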

    Read the article

  • Find Content Related Images in Internet Explorer 8

    - by Asian Angel
    Do you want an easy way to find images related to news stories or articles when browsing in Internet Explorer? Then you will definitely want to have a look at the Bing Image Search accelerator. Two simple steps will have the new accelerator installed: click "Add to Internet Explorer" to start the process and confirm the installation when the secondary window appears. For the first of our two examples we chose the “Deepwater Horizon rig”. To find images, highlight the word/name that you are interested in and select Bing Image Search in the context menu. Hovering your mouse over the context menu listing will present a “top” image in a popup window. Or, if you prefer to view multiple images, click on the context menu listing: a Bing image search for the highlighted word/name will be opened in a new tab. Our second example was the Guatemalan volcano “Pacaya”. As before, we first viewed the popup window image, followed by a full image search in a new tab. Conclusion: the Bing Image Search accelerator makes it a simple task to find additional images related to what you are reading or looking at while browsing. Links: Add the Bing Image Search accelerator to Internet Explorer 8.

    Read the article

  • What is the equivalent of 127.255.255.255 for OS X machines so I can test broadcast UDP packets without a network?

    - by JohnPristine
    I am trying to test my program that makes use of broadcast UDP (not multicast!). On Linux, I can use the address 127.255.255.255:64651 and everything works beautifully; in other words, I send a packet to 127.255.255.255:64651 and multiple clients listening on that port get the packet. A real broadcast example! Unfortunately, on my OS X machine (Mountain Lion) the same example does not work. Is there any way I can get 127.255.255.255 to work on Mac machines? Any other solution to get broadcast working on my Mac machine without a network? Note: it has to be broadcast, not multicast.
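    For comparison, a minimal hedged sketch (in Java, since the question doesn't name a language) of the portable approach: enable SO_BROADCAST and send to the limited broadcast address 255.255.255.255, instead of relying on the loopback broadcast address that Linux happens to honor. Names and the payload are invented for illustration; the port matches the example above.

        // Sketch: send one broadcast datagram to the port used in the question.
        import java.net.DatagramPacket;
        import java.net.DatagramSocket;
        import java.net.InetAddress;

        public class BroadcastSend {
            public static void main(String[] args) throws Exception {
                try (DatagramSocket sock = new DatagramSocket()) {
                    // SO_BROADCAST must be enabled before sending to a broadcast address
                    sock.setBroadcast(true);
                    byte[] payload = "hello".getBytes("UTF-8");
                    sock.send(new DatagramPacket(payload, payload.length,
                            InetAddress.getByName("255.255.255.255"), 64651));
                }
            }
        }

    For multiple listeners on one port, each receiving socket also needs address reuse enabled before binding (SO_REUSEADDR, and on BSD-derived systems such as OS X, SO_REUSEPORT as well).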

    Read the article

  • Postgres error messages in Apache error log

    - by Warren Pena
    I'm running Apache 2.2, PHP 5.2, and Postgres 8.2 on Windows, and I'm seeing something funky in Apache's error.log file. Occasionally, I'll see the message "row number -1 is out of range 0..-1" pop up over and over. Unlike all the other lines in that log file, there's no timestamp or log level - just that exact string. Googling around, it appears that message is, character for character, a common Postgres error message, but it is not an Apache error. I've seen it happen multiple times, and on several different servers, but I can't seem to reproduce it. I've tried throwing all sorts of error-ridden database queries and result set inquiries at Postgres via PHP, and none of them seem to trigger that line being written to the log file. Is it possible for Postgres errors to end up in my Apache log file, and if so, how? What would trigger an error message like that? Thanks!
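    For what it's worth, that exact wording matches libpq's row-range check (e.g. in PQgetvalue), and libpq's default notice processor prints such messages directly to stderr, which Apache folds into error.log without its usual timestamp or log level. A hedged C sketch of the kind of call that reproduces it; the connection string is a placeholder and error handling is elided:

        #include <stdio.h>
        #include <libpq-fe.h>

        int main(void) {
            PGconn *conn = PQconnectdb("dbname=test");            /* placeholder conninfo */
            PGresult *res = PQexec(conn, "SELECT 1 WHERE false"); /* zero-row result */
            int last = PQntuples(res) - 1;                        /* -1 for an empty result */
            /* Out-of-range row: libpq's default notice processor writes
               "row number -1 is out of range 0..-1" to stderr, bypassing
               Apache's log formatting entirely. */
            char *val = PQgetvalue(res, last, 0);
            (void)val;
            PQclear(res);
            PQfinish(conn);
            return 0;
        }

    If a PHP layer somewhere computes a row index like pg_num_rows($res) - 1 without checking for an empty result, it would hit this path only for queries that happen to return no rows, which would explain why it is hard to reproduce on demand.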

    Read the article

  • Installing and maintaining an email server

    - by Andrew
    I need to move hosting providers for four or five domains, and for several reasons I'm considering a Linux VPS rather than staying with my current shared, managed hosting provider. The only thing that's stopping me is email. I have lots of experience running and maintaining Apache, but none with email servers. Based on some research, if I want to keep what I'm using now, it looks like I'd be going with Postfix and Dovecot, and probably Exim and SpamAssassin. I have no problem performing regular maintenance and watching for security updates, but I don't want to bite off more than I can chew. For someone new to email services, how hard is it to set up an email server that is externally accessible (via SMTP and POP3, not IMAP), available over SSL/TLS, and reasonably reliable for multiple domains? How much of a time commitment is it to maintain one?
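    For a sense of scale, here is a hedged sketch of the handful of Postfix main.cf settings that a minimal multi-domain setup with TLS revolves around. Domains, paths, and the map type are placeholders, and this is nowhere near a complete or hardened configuration; Dovecot would separately handle POP3 and its own TLS.

        # /etc/postfix/main.cf -- illustrative fragment only
        myhostname = mail.example.com
        mydestination = $myhostname, localhost

        # additional domains handled through a lookup table
        virtual_alias_domains = example.org, example.net
        virtual_alias_maps = hash:/etc/postfix/virtual

        # TLS for inbound SMTP
        smtpd_tls_cert_file = /etc/ssl/certs/mail.pem
        smtpd_tls_key_file = /etc/ssl/private/mail.key
        smtpd_tls_security_level = may

    After editing the lookup table, postmap /etc/postfix/virtual followed by postfix reload applies the change. In practice the ongoing time commitment tends to be dominated by spam filtering and deliverability (SPF/DKIM, blacklist monitoring) rather than the initial setup.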

    Read the article

  • Two network interfaces and two IP addresses on the same subnet in Linux

    - by Scott Duckworth
    I recently ran into a situation where I needed two IP addresses on the same subnet assigned to one Linux host so that we could run two SSL/TLS sites. My first approach was to use IP aliasing, e.g. using eth0:0, eth0:1, etc., but our network admins have some fairly strict security settings in place that squashed this idea: They use DHCP snooping and normally don't allow static IP addresses. Static addressing is accomplished by using static DHCP entries, so the same MAC address always gets the same IP assignment. This feature can be disabled per switchport if you ask and have a reason for it (thankfully I have a good relationship with the network guys and this isn't hard to do). With the DHCP snooping disabled on the switchport, they had to put in a rule on the switch that said MAC address X is allowed to have IP address Y. Unfortunately this had the side effect of also saying that MAC address X is ONLY allowed to have IP address Y. IP aliasing required that MAC address X be assigned two IP addresses, so this didn't work. There may have been a way around these issues in the switch configuration, but in an attempt to preserve good relations with the network admins I tried to find another way. Having two network interfaces seemed like the next logical step. Thankfully this Linux system is a virtual machine, so I was able to easily add a second network interface (without rebooting, I might add - pretty cool). A few keystrokes later I had two network interfaces up and running, and both pulled IP addresses from DHCP. But then the problem came in: the network admins could see (on the switch) the ARP entries for both interfaces, but only the first network interface that I brought up would respond to pings or any sort of TCP or UDP traffic. After lots of digging and poking, here's what I came up with. It seems to work, but it also seems to be a lot of work for something that should be simple. Any alternate ideas out there?

    Step 1: Enable ARP filtering on all interfaces:

        # sysctl -w net.ipv4.conf.all.arp_filter=1
        # echo "net.ipv4.conf.all.arp_filter = 1" >> /etc/sysctl.conf

    From the file networking/ip-sysctl.txt in the Linux kernel docs:

        arp_filter - BOOLEAN
        1 - Allows you to have multiple network interfaces on the same subnet,
        and have the ARPs for each interface be answered based on whether or
        not the kernel would route a packet from the ARP'd IP out that
        interface (therefore you must use source based routing for this to
        work). In other words it allows control of which cards (usually 1)
        will respond to an arp request.
        0 - (default) The kernel can respond to arp requests with addresses
        from other interfaces. This may seem wrong but it usually makes sense,
        because it increases the chance of successful communication. IP
        addresses are owned by the complete host on Linux, not by particular
        interfaces. Only for more complex setups like load-balancing, does
        this behaviour cause problems.
        arp_filter for the interface will be enabled if at least one of
        conf/{all,interface}/arp_filter is set to TRUE, it will be disabled
        otherwise

    Step 2: Implement source-based routing. I basically just followed the directions from http://lartc.org/howto/lartc.rpdb.multiple-links.html, although that page was written with a different goal in mind (dealing with two ISPs). Assume that the subnet is 10.0.0.0/24, the gateway is 10.0.0.1, the IP address for eth0 is 10.0.0.100, and the IP address for eth1 is 10.0.0.101. Define two new routing tables named eth0 and eth1 in /etc/iproute2/rt_tables:

        ... top of file omitted ...
        1 eth0
        2 eth1

    Define the routes for these two tables:

        # ip route add default via 10.0.0.1 table eth0
        # ip route add default via 10.0.0.1 table eth1
        # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100 table eth0
        # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101 table eth1

    Define the rules for when to use the new routing tables:

        # ip rule add from 10.0.0.100 table eth0
        # ip rule add from 10.0.0.101 table eth1

    The main routing table was already taken care of by DHCP (and it's not even clear that it's strictly necessary in this case), but it basically equates to this:

        # ip route add default via 10.0.0.1 dev eth0
        # ip route add 10.0.0.0/24 dev eth0 src 10.0.0.100
        # ip route add 10.0.0.0/24 dev eth1 src 10.0.0.101

    And voila! Everything seems to work just fine. Sending pings to both IP addresses works fine. Sending pings from this system to other systems and forcing the ping to use a specific interface works fine (ping -I eth0 10.0.0.1, ping -I eth1 10.0.0.1). And most importantly, all TCP and UDP traffic to/from either IP address works as expected. So again, my question is: is there a better way to do this? This seems like a lot of work for a seemingly simple problem.

    Read the article

  • Do MORE with WebCenter

    - by Michael Snow
    We’ve been extremely busy here on the Oracle WebCenter team. We hope that you’ve all been keeping up with the interesting news each week. Last week was jammed full of Gartner PCC and Gartner 360 buzz. If you missed any of the highlights, be sure to check out both Kellsey’s post from last week, "Gartner PCC: A Shovel & Some Ah-Ha's", and Christie’s overview of Loren Weinberg’s PCC presentation, "Here Today, Gone Tomorrow: Engage Your Customers or Lose Them". This week, we’ll be focusing on “Doing More with WebCenter”, leading up to a great webcast scheduled for Thursday, March 22 (registration details below). This is the 2nd in a series of 3 webcasts dedicated to expanding the understanding of the full capabilities of WebCenter. Yes - that might mean that you are not getting the full benefits of the software you already own, or the expansion potential via upgrade to the full WebCenter Suite Plus. Tune in on Thursday, 10 a.m. PT / 1 p.m. ET.

    Want to be a Speaker at Oracle OpenWorld 2012? Oracle OpenWorld planning has already kicked off. We know that it is only March and next October is far in the distance, but planning has already started for Oracle OpenWorld 2012. So if you want to be a speaker and propose your own session for this year's event in San Francisco, September 30th - October 4th, start thinking now! The annual OpenWorld Call for Papers is now open until April 9th. All of the details to submit a paper are available here. Of course, the WebCenter team here is interested in sessions including case studies, thought leadership, and customer stories around any of the Oracle WebCenter solutions, but the Call for Papers is open to all Oracle topics. When submitting your topic, be sure to describe what you plan to discuss and the value of the presentation to other attendees. Sell your session, because there will be a lot of competition to be selected. Bonus news: speakers for selected sessions receive a complimentary full conference pass! Get your papers in and we'll see you in San Francisco!

    Webcast Series: Do More with Oracle WebCenter - Expand Beyond Content Management. Enable employees, partners, and customers to do more with your content. Did you know that, in addition to content management, Oracle WebCenter now also includes comprehensive portal, composite application, collaboration, and Web experience management capabilities? Join us for this Webcast and learn how you can provide a new level of user engagement. Learn how Oracle WebCenter:

    - Drives task-specific application data and content to a single screen for executing specific business processes
    - Enables mixed internal and external environments where content can be securely shared and filtered with employees, partners, and customers, based upon role-based security
    - Offers Web experience management, driving contextually relevant, social, and interactive online experiences across multiple channels
    - Provides social features that enable sharing, activity feeds, collaboration, expertise location, and best-practices communities

    Learn how to do more with Oracle WebCenter. Register now for the second Webcast in the series "Do More With Oracle WebCenter": March 22, 2012, 10 a.m. PT / 1 p.m. ET. Presented by Michelle Huff, Senior Director, WebCenter Product Management, Oracle, and Greg Utecht, Project Manager, IT Operations, TIES.

    Read the article

  • Events and objects being skipped in GameMaker

    - by skeletalmonkey
    Update: It turns out it's not an issue with this code (or at least not entirely). Somehow the objects I use for keylogging and player automation (basic AI that plays the game) are being 'skipped' or not loaded about half the time. These are invisible objects in a room that have basic effects such as simulating button presses or logging them. I don't know how to better explain this problem without putting up all my code, so unless someone has heard of this issue, I guess I'll be banging my head against the desk for a bit. /Update

    I've been continuing work on modifying Spelunky, but I've run into a pretty major issue with GameMaker, which I hope is just me doing something wrong. I have the code below, which is supposed to write log files named sequentially. It's placed in an End Room event, so that when a player finishes a level it writes all their keypresses to file. The problem is that it randomly skips files, and when it reaches about 30 logs it stops creating any new files.

        var file_name;
        file_count = 4;
        file_name = file_find_first("logs/*.txt", 0);
        while (file_name != "") {
            file_count += 1;
            file_name = file_find_next();
        }
        file_find_close();

        file = file_text_open_write("logs/log" + string(file_count) + ".txt");
        for (i = 0; i < ds_list_size(keyCodes); i += 1) {
            file_text_write_string(file, string(ds_list_find_value(keyCodes, i)));
            file_text_write_string(file, " ");
            file_text_write_string(file, string(ds_list_find_value(keyTimes, i)));
            file_text_writeln(file);
        }
        file_text_close(file);

    My best guess is that the first counting loop is taking too long and the whole thing is getting dropped? Also, if anyone can tell me of a better way to have sequentially numbered log files, that would also be great. Log files have to continue counting over multiple starts/stops of the game.
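    On the sequential-numbering question, a hedged GML sketch of one alternative: persist a counter in an INI file instead of re-scanning the logs directory at every level end, so the numbering can't be affected by how long file_find_first/file_find_next take. The file and section names are placeholders, and this is untested.

        // Sketch: read, bump, and store a persistent log counter.
        ini_open("logs/counter.ini");
        var log_index;
        log_index = ini_read_real("log", "next", 0);   // defaults to 0 on first run
        ini_write_real("log", "next", log_index + 1);  // reserve this number
        ini_close();
        file = file_text_open_write("logs/log" + string(log_index) + ".txt");

    Because the counter survives game restarts, numbering also keeps increasing across multiple runs, which matches the requirement above.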

    Read the article

  • Why did Windows change the elevation requirements of my AutoHotKey script, and how can I prevent such in the future?

    - by monsto
    I was working on an AutoHotKey (AHK) script to create prefab mouse movements for a very simple model viewer. I worked on it for a good hour, zipped the script, posted it to a forum, and thought "oh hey, I should add bla bla bla to the script". When I returned to the program, the AHK script would not work. I could see the mouse movements working in other programs (Notepad, Chrome, etc.), but not where I had been working the previous hour. After several hours of throwing darts at the troubleshooting wall, I discovered that the fix was to set AHK.exe to Run as Administrator. The questions here are multiple: Why did Windows 7, in all its wisdom, suddenly decide that elevation was necessary in the middle of usage? Can these permission requirements somehow be reverted by, say, removing a key from the registry or something? How can this kind of Windows behaviour be avoided in the future?

    Read the article

  • Copy Formatting in Word

    - by Ahamad Patan
    Many times you may need to copy formatting in Word. The Format Painter feature lets you quickly and easily copy all the formatting characteristics from one group of selected text to another. This is helpful when you have several headings that you want formatted consistently. Here are the steps:

    1. Select, or highlight, the text containing the format you wish to copy.
    2. Office 2003: click the Format Painter button on the Standard toolbar (it looks like a paintbrush). Office 2007: the Format Painter button is located on the Home tab (it also looks like a paintbrush). In Office 2003 an I-beam with a small cross to the left will appear as you move your mouse; in Office 2007 it is an I-beam with a small paintbrush.
    3. Select the text you wish to copy the formatting to.
    4. The formatting of the selected text will change automatically.

    For multiple formatting changes, double-click the Format Painter button in step 2. Remember, you'll have to click it again to deselect it, or press Esc.

    Read the article

  • Self-hosted image gallery

    - by Robert Munteanu
    I'd like to host an image gallery for family and friends. Is there any solution out there which offers:

    - albums (I expect most do);
    - the ability to specify tags/persons/places etc., similar to what f-spot does;
    - an API to upload multiple photos, or just a bulk uploader;
    - blocking out unregistered users;
    - letting users leave comments.

    Update: please do assume that I'm able to host the service myself, since I've asked for a self-hosted solution.

    Read the article

  • Should programmers itemize testing for projects? [on hold]

    - by Patton77
    I recently hired a programming team to do a port of my iPad app to the iPhone and Android platforms. Now, in a separate contract, I am asking them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "testing all of the tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I am not familiar with this level of itemization, but maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period. They are using Cocos 2D-X, and they say that because the tips go to multiple platforms, the hours jack up. I feel like they might be overcharging, and it's difficult for me to know because it's kind of like with a mechanic: "It took us 5 hours to replace the radiator." How can you dispute that? It seems to me that most of you would charge for the work but NOT for hours spent 'testing'. Am I missing something? Thanks for any help and advice you can give!

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction: It is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, using any protocol over any network.

    Highlights:
    - Add sync support to new and existing applications, services and devices
    - Enable collaboration and offline capabilities for any application
    - Roam and share information from any data store, over any protocol and any network configuration
    - Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
    - Extend the architecture to support custom data types, including files

    Benefits of using Sync Framework:
    - An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
    - A managed API for all components and a native API for select components.
    - Conflict handling for automatic and custom resolution schemes.
    - Filters that let you synchronize a subset of data, such as only those files that contain images.
    - A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
      - Any data store: add synchronization to a wide range of applications, services and devices.
      - Any data type: introduce new data types to synchronize.
      - Any protocol: use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
      - Any network configuration: enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.

    Read the article

  • Release 17 is here!

    - by Cheryl
    Our training development team has been busy updating courses to keep pace with the new release of CRM On Demand. Release 17 is here! And I heard recently that it's one of our biggest releases ever - a lot of new features and functionality for you to take advantage of, too much for me to cover in this blog post. But I thought I'd tell you about a few of my favorites. Be sure to take a look at the What's New in Release 17 recording to see the full list, though, because I'm only going to touch on a few.

    Create your own look - okay, I'm starting with the fun stuff. There is a new customizable-themes feature so that you can change the look of the application: colors, logo, the shape of the tabs. And it's really easy. There's also a whole new library of ready-made themes for you to pick from if you just want to go with one of those. Use this new feature to match the look of your company logo and color scheme, or blaze new trails. You can create the look for the whole company, or a different look for each CRM On Demand role. This might especially come in handy if you're using the Partner Relationship Management (PRM) capabilities of CRM On Demand - you can create themes for your partner-facing roles to provide branded partner portals.

    Speaking of PRM - there are enhancements in this release to help companies better manage their partner relationships. A new Deal Registration object, which is separate from the Opportunity record, and better Special Pricing Request and Marketing Development Fund Request processes give a lot more flexibility in how companies can build and manage their relationships with partners.

    There are some new options for forecasts in Release 17, too. You can now have more than one type of forecast generated each forecast period. For example, you might need to see a forecast of the total opportunity revenue for your sales team, as well as one that breaks down revenue by product; the forecast definition now lets you do that. Other options allow you to make submitting forecasts easier, split opportunity revenue across the team, and forecast that split appropriately. And look for the new Forecast subject area in Answers, for building custom forecast reports.

    Ever wish you could use Workflow Rules to automatically reassign leads if they haven't been followed up on, or to email a manager if the status of a service request isn't changed after a specified period of time? Then check out the new Wait action for workflows. I think you'll be happy.

    OK, enough for today. There is a lot to Release 17 that I didn't mention - a lot has been added for our Life Science industry edition, some new data visibility options, a new Data Loader tool, and more. Stay tuned for more blog posts about these and other Release 17 features in the coming weeks. In the meantime, don't forget about all of the resources we have for you to learn more (see my Learning About Release 17 blog post for details).

    Read the article

  • Best practice for setting Effect parameters in XNA

    - by hichaeretaqua
    I want to ask if there is a best practice for setting Effect parameters in XNA. Or, in other words, what exactly happens when I call pass.Apply()? I can imagine multiple scenarios: (1) Each time Apply is called, all effect parameters are transferred to the GPU, and therefore it has no real influence how often I set a parameter. (2) Each time Apply is called, only the parameters that got reset are transferred, so caching Set operations that don't actually set a new value should be avoided. (3) Each time Apply is called, only the parameters that got changed are transferred, so caching Set operations is useless. (4) This whole question is moot because none of the mentioned ways has any noteworthy impact on game performance. So the final question: is it useful to implement some caching of the Set operation, like this?

        private Matrix _world;

        public Matrix World
        {
            get { return _world; }
            set
            {
                if (value == _world) return;
                _effect.Parameters["xWorld"].SetValue(value);
                _world = value;
            }
        }

    Thanking you in anticipation.

    Read the article

  • What to use for simple cross-platform games instead of Flash?

    - by jmh_gr
    In short, for simple games: Is Flash still a good option for browser-based PC clients? It still has 90%+ penetration. What is a good alternative for mobile devices? Is HTML5 + JavaScript the choice for mobile? Or does one have to learn a new native language for each target platform (Android, Apple, Windows Phone)? If you desire further background: there are more blogs about the official demise of mobile Flash than I can count, along with endless useless and vitriolic comments. I'm actually trying to do something practical: build simple games that can be served across multiple platforms. Several months ago I plopped down $1100 for CS5.5 Web and am wading into Flash. Bummer. My question to people who actually develop simple games and apps: what platform should I use instead? Is Flash still a sensible platform for web-served PC users? For example, let's say I build a simple arcade game that I would like to serve as an app to mobile users and as a browser-based game to PC users. Should I still invest the time and effort to learn and develop in Flash for the PC users, while building a parallel code set in some other language for mobile users? My games are simple enough that it would be annoying, but not inconceivable, to maintain parallel code sets.

    Read the article

  • Consistency of an object

    - by Stefano Borini
    I tend to keep my objects consistent during their lifetime. In some cases, setting up an object requires multiple calls to different routines. For example, a connection object may operate in this way:

        Connection c = new Connection();
        c.setHost("http://whatever");
        c.setPort(8080);
        c.connect();

    Please note this is just a stupid example to let you understand the point. In between the calls to setHost and setPort the object is inconsistent, because the port has not been specified yet, so this code would crash:

        Connection c = new Connection();
        c.setHost("http://whatever");
        c.connect();

    Meaning that it's a prerequisite for connect() to have previous calls to both setHost and setPort; otherwise it won't be able to operate, as its state is inconsistent. You may fix the issue with a default value, but there may be cases where no sensible default can be devised. We assume in the later example that there's no default for the port, and therefore a call to c.connect() without first calling both setHost and setPort leaves the object in an inconsistent state. This, to me, points at an incorrect interface design, but I may be wrong, so I want to hear your opinion. Do you organize your interface so that the object is always in a consistent (i.e. workable) state both before and after the call? Edit: Please don't try to solve the problem I gave above. I know how to solve that. My question is much broader in sense: I am looking for a design principle, officially or informally stated, regarding consistency of object state between calls.
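    A hedged sketch of one such principle, sometimes phrased as "make invalid states unrepresentable": require everything connect() needs at construction time, so no call sequence can ever observe a half-configured object. Class and argument names here are invented for illustration.

        // Sketch: the object is consistent from the moment it exists,
        // because the constructor demands everything connect() will need.
        public final class Connection {
            private final String host;
            private final int port;

            public Connection(String host, int port) {
                if (host == null || host.isEmpty())
                    throw new IllegalArgumentException("host is required");
                if (port < 1 || port > 65535)
                    throw new IllegalArgumentException("port out of range");
                this.host = host;
                this.port = port;
            }

            public void connect() {
                // host and port are guaranteed valid here
            }
        }

    Usage collapses to new Connection("http://whatever", 8080).connect(); there is simply no way to reach connect() in the inconsistent intermediate state described above. When the argument list grows long, a builder whose build() performs the same validation is a common variation.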

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within plyr is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package:

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame:

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year = x$year[1], cv.count = cv)
        }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
           year  cv.count
        1  2000 0.3984848
        2  2001 0.6062178
        3  2002 0.2309401
        4  2003 0.5773503
        5  2004 0.3069680
        6  2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year = x$year[1], mean.count = mean.count)
        }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

        R> head(res)
           year mean.count
        1  2000   7.666667
        2  2001  13.333333
        3  2002  15.000000
        4  2003   3.000000
        5  2004  12.333333
        6  2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year = x$year[1], count = x$count, total.count = total.count)
        }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

        R> head(res)
           year count total.count
        1  2000     5          23
        2  2000     7          23
        3  2000    11          23
        4  2001    18          40
        5  2001     4          40
        6  2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count), cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
        }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

        R> head(res)
           year count        mu    sigma        cv
        1  2000     5  7.666667 3.055050 0.3984848
        2  2000     7  7.666667 3.055050 0.3984848
        3  2000    11  7.666667 3.055050 0.3984848
        4  2001    18 13.333333 8.082904 0.6062178
        5  2001     4 13.333333 8.082904 0.6062178
        6  2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000)  # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize, homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
        }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
        res <- ore.sort(X, by = c("year", "team"))

        R> head(res)
           year team homeruns
        1  2001  ANA        4
        2  2001  ARI      155
        3  2001  ATL       63
        4  2001  BAL       58
        5  2001  BOS       77
        6  2001  CHA       63

    Our next example is derived from the ggplot function documentation and illustrates the use of ddply with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke this on the database server; the graph is returned to the client window.

        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))
        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[, 1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel = TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • First Day of Data Integration Track at Oracle OpenWorld 2012

    - by Irem Radzik
    OpenWorld started at full speed for us today with a great set of sessions in the Data Integration track. After the exciting keynote session on Oracle Database 12c in the morning, Brad Adelberg, VP of Development for Data Integration products, presented Oracle's data integration product strategy. His session highlighted the new requirements for data integration to achieve pervasive and continuous access to trusted data. The new requirements and product focus areas presented in this session are: provide access to any data at any source, on premise or on cloud; enable zero-downtime operations and maximum performance; leverage real-time data for accurate business insights; and ensure high-quality data is used across the enterprise. During the session Brad walked through how Oracle's data integration products - Oracle Data Integrator, Oracle GoldenGate, Oracle Enterprise Data Quality, and Oracle Data Service Integrator - deliver on these requirements, and how recent product releases build on this strategy.

    Soon after Brad's session we heard from a panel of Oracle GoldenGate customers - St. Jude Medical, Equifax, and Bank of America - about how they achieved zero-downtime operations using Oracle GoldenGate. The panel presented different use cases of GoldenGate, from active-active replication to offloading reporting. St. Jude Medical's implementation in particular, which involves the alert management system for patients who use their pacemakers, reminded me that in some cases downtime of mission-critical systems can be a matter of life or death. It is very comforting to hear that GoldenGate delivers highly reliable continuous availability for life-saving medical systems.

    In the afternoon, Nick Wagner from the Product Management team and I followed the customer panel with a review of Oracle GoldenGate 11gR2's new features. Many questions we received from the audience were about GoldenGate's new Integrated Capture for Oracle Database and the enhanced Conflict Management features, as well as how GoldenGate compares to Oracle Streams. In addition to giving details on GoldenGate's unique capability to capture changed data with a direct integration to the Oracle DBMS engine, we reminded the audience that enhancements to Oracle GoldenGate will continue, while Streams will be primarily maintained.

    Last but not least, Tim Garrod and Ryan Fonnett from Raymond James presented a unified real-time data integration solution using Oracle Data Integrator and GoldenGate for their operational data store (ODS). The ODS supports application services across the enterprise, and providing timely data is a critical requirement. In this solution, Oracle GoldenGate does the log-based change data capture for Oracle Data Integrator's near real-time data integration between heterogeneous systems. As Raymond James' ODS supports mission-critical services for their advisors, the project team had to set up this integration environment to be highly available. During the session, Ryan and Tim explained how they use ODI to enable automated process execution and "always-on" integration processes. Their presentation included two demonstrations that focused on CDC patterns deployed with ODI and on automated multi-instance execution and monitoring. We are very grateful to Tim and Ryan for their very well-prepared presentation at OpenWorld this year. Day 2 (Tuesday) will also be a busy day in our track.
    In addition to the Fusion Middleware Innovation Awards ceremony at 11:45am at Moscone West 3001, we have the following DI sessions:

    - Real-World Operational Reporting Customer Panel - 11:45am, Moscone West 3005
    - Oracle Data Integrator Product Update and Future Strategy - 1:15pm, Moscone West 3005
    - High-volume OLTP with Oracle GoldenGate: Best Practices from Comcast - 1:15pm, Moscone West 3005
    - Everything You Need to Know about Monitoring Oracle GoldenGate - 5pm, Moscone West 3005

    If you are at OpenWorld, please join us in these sessions. For a full review of the data integration track at OpenWorld, please see our Focus-On document.

    Read the article

  • Synergy to share mouse only (not keyboard)

    - by Stan
    Synergy is very handy for sharing a single mouse/keyboard set between multiple PCs. But I have a laptop (WinXP SP3) and a desktop (OS X Snow Leopard), a mouse, and a keyboard, which plug into the desktop. I don't need to share the keyboard since each machine has one. Is there any way to share the mouse only? Keyboard sharing, if enabled, might cause inconvenience. Thanks. PS. Another quick question: how do I stop the Synergy daemon on Mac OS? synergys --help doesn't give any information.
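    For reference, a minimal hedged sketch of a standard synergy.conf for the two machines described; the screen names are placeholders, and note that a stock configuration like this shares both keyboard and mouse - I'm not aware of a built-in mouse-only switch.

        section: screens
            desktop:
            laptop:
        end

        section: links
            desktop:
                right = laptop
            laptop:
                left = desktop
        end

    On the Mac side, a synergys started from a shell can be stopped with killall synergys; if it was registered with launchd, unloading the corresponding plist via launchctl is the usual route.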

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records; you can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done over multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed without anything being indexed from the sitemap page that shows where the records exist, for example:

        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>

    Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc., and "next"? Can I output only the links? Or maybe something like:
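    On the "how would the search engines find the crawl page" part, one well-known mechanism is the sitemap protocol: a sitemap file is referenced from robots.txt or submitted via the engines' webmaster tools, and it lists record URLs directly without rendering any anchor text to be indexed. A hedged sketch with placeholder URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://example.com/record.php?id=record1</loc></url>
          <url><loc>http://example.com/record.php?id=record2</loc></url>
          <!-- one <url> entry per record; the protocol allows up to
               50,000 URLs per file, with multiple files tied together
               by a sitemap index -->
        </urlset>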

    Read the article

  • Open source alternative for Canonical Landscape? [on hold]

    - by netvope
    From Canonical: "Landscape is an easy-to-use systems management and monitoring service that enables you to manage multiple Ubuntu machines as easily as one through a simple Web-based interface." However, Landscape is not free. The Red Hat counterpart, Satellite, has a free version called Spacewalk, but it doesn't work on Ubuntu. (There is an attempt to port Spacewalk to Debian, but it doesn't look like it's stable yet.) Are there any open source alternatives to Landscape? Better yet, is there any Spacewalk-like software that works on both RedHat-based and Debian-based systems?

    Read the article

  • null pointers vs. Null Object Pattern

    - by GlenH7
    Attribution: This grew out of a related P.SE question. My background is in C/C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second nature, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern, where the idea is that an object is always returned: the normal case returns the expected, populated object, and the error case returns an empty object instead of a null pointer. The premise is that the calling function will always have some sort of object to access, and therefore avoids null-access memory violations. So what are the pros/cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have problems similar to not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own, and this could get ugly with multiple layers of nesting.
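    For concreteness, a hedged Java sketch of the pattern with invented names, showing exactly where the hidden-failure concern comes in:

        // Sketch: a Null Object for a logger interface.
        interface Log {
            void write(String message);
        }

        class FileLog implements Log {
            public void write(String message) { /* append to a file */ }
        }

        // The "empty object": always safe to call, does nothing.
        class NullLog implements Log {
            public void write(String message) { }
        }

        class LogFactory {
            // Never returns null, so callers skip the null check...
            static Log open(String path) {
                try {
                    return new FileLog();
                } catch (RuntimeException e) {
                    // ...but a failure here degrades silently into a no-op,
                    // which is the hidden-failure risk raised above.
                    return new NullLog();
                }
            }
        }

    The nesting concern is real, too: a composite object has to hand out null objects for every child collection, or callers end up mixing both styles, checking for null in some places and relying on safe no-ops in others.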

    Read the article

  • Win 2008 single server development environment (architecture)

    - by Tommy Jakobsen
    I have a few questions about a test/development environment that I'm setting up on this server:

    - Intel Core i7-920 quad-core incl. Hyper-Threading
    - 8 GB DDR3 RAM
    - 2x 750 GB SATA-II (probably software RAID 1)

    The server is going to support at most 5 users, maybe 10 when stressed. I was hoping that I could run all of the following products on the same server:

    - Windows Server 2008 R2 x64 w/ IIS
    - SQL Server 2008 x64 (R2 when released)
    - Team Foundation Server 2010
    - SharePoint Foundation 2010

    I know this sounds like overkill, but remember that this is for development and testing purposes; it is not a production environment. My question is whether this will be possible at all. Should I run it all on one Windows 2008 installation, or should I run it in multiple virtual environments using Hyper-V? What do you think?

    Read the article
