Search Results

Search found 19115 results on 765 pages for 'region specific'.


  • Google Sky Map Turns Your Android Phone into a Digital Telescope

    - by ETC
    Whether you're an astronomy buff or just somebody looking for a perfect "look how sweet my smartphone is!" application, Google's Sky Map application for Android phones is a must-have app. If all the application did was show you detailed views of the night sky, it would be pretty awesome on that basis alone. Where Sky Map dazzles, however, is in linking together the GPS and tilt sensors on your phone to turn it into a sky-watching window: whatever you point the phone at, the screen displays. Want to see what stars are directly above you, even in the middle of the day? Point the phone up. Curious what people on the opposite side of the world are seeing? Point the phone down and take a peek right through the Earth. Check out the video below to see the application in action. Google Sky Map is free and works wherever Android does. Google Sky Map [AppBrain]

    Read the article

  • Oracle Managed Cloud Services - gain more from your Oracle investments

    - by yaldahhakim
    Oracle Managed Cloud Services delivers enterprise-grade, end-to-end managed cloud services across Oracle's portfolio of business applications, middleware, database, and hardware technologies. Organizations can deploy solutions according to their own specific needs and budget, and decide where applications are hosted. Oracle can manage applications at customer sites, through Oracle's partners, or at one of Oracle's data centers. Organizations can also choose a hybrid model for different elements of the IT environment, and move back and forth over time as strategy or requirements change. Options include: Oracle Applications on demand, where you leverage any Oracle application, hosted and managed by Oracle; and Oracle Technology on demand, a set of end-to-end managed services for Oracle Engineered Systems and Oracle's technology platform, including infrastructure (servers and storage), database, virtualization, operating system, and middleware. In addition, with Oracle Managed Cloud Services, your systems and data are secure and protected at every layer. Managed Cloud Services has extensive global expertise, best-practice security and regulatory compliance, and standard operating processes that will ensure your data and business-critical information are safe. Oracle Managed Cloud Services helps you leverage Oracle's years of experience so you can better focus and direct your resources. Let Oracle Cloud Services build and manage your cloud for you while you focus on driving your business forward. Learn more at: www.Oracle.com/managedcloudservices

    Read the article

  • Want an iPad? How-To Geek is Giving One Away!

    - by The Geek
    That's right. All you have to do to enter is become a fan of our Facebook page, and we'll pick a random fan to win the prize. Once we've got 10,000 fans, we'll change the prize from an iPod Touch to an iPad 16GB (typo in the image above). Everybody who is already a fan is automatically entered in the contest! (There's no country restriction.) So to make sure we upgrade the prize, make sure to share the How-To Geek Facebook Fan page with your friends who might be interested (don't mindlessly spam everybody, of course). We've already got the iPad sitting here. Win an iPad on the How-To Geek Facebook Fan Page

    Read the article

  • How to advocate Stack Overflow at work

    - by Gordon
    I am thinking of doing a short presentation at work about using Stack Overflow as a resource for your day job. What is the experience of doing this? Would you deem it a valid resource to tell your colleagues about, or is it similar to telling them about Google as a resource? Is there a better way of doing it? I was leaning toward the asking-questions side of Stack Overflow rather than answering, to avoid the you-shouldn't-be-doing-this-on-work-time argument. Just as a follow-up: originally I didn't want to make the question too specific to my own case. My presentation will only be a quick four-minute talk, which I will repeat over an hour to different groups. I may ask a question on Stack Overflow before the talk and refer to it during the presentation; hopefully I will get some activity during the hour. I am also going to talk briefly about some of the other Stack Exchange sites that would fit the audience, as they are not all developers. I think Super User, Server Fault and Programmers should work well. I will not be doing the presentation for another couple of months as it has been rescheduled, but I will update on how I got on.

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records; you can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) that will allow me to point spiders to a list of all records? FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution; outputting the records must be done in multiple requests. Each record can be viewed via a single page (e.g. "record.php?id=xyz"). I would like all the records indexed, but nothing indexed from the sitemap-style page that merely lists where the records exist, for example:

        <a href="/result.php?id=record1">Record 1</a>
        <a href="/result.php?id=record2">Record 2</a>
        <a href="/result.php?id=record3">Record 3</a>
        <a href="/seo.php?page=2">next</a>

    Assuming this is the correct approach, I have these questions:
    - How would the search engines find the crawl page?
    - Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links?
    Or maybe something like:

    Read the article

  • Could not retrieve backup settings for primary ID in Log shipping

    - by user1723139
    I am doing log shipping between two Amazon EC2 instances running Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition. Both instances are in the same domain and I can access the shared folders between them. The SQL Server service account and agent service account are all running under a domain account. When I activate log shipping (with standby-mode restore on the secondary server), the initial backup gets restored on the secondary. After that, the backup operation fails and I get the following error messages:

        *** Error: Could not retrieve backup settings for primary ID 'xxxxxx-xxxx-xxxx-xxxx-4d772cd7337e'.(Microsoft.SqlServer.Management.LogShipping) ***
        *** Error: Failed to connect to server IP-0A7653F2.(Microsoft.SqlServer.ConnectionInfo) ***
        *** Error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)(.Net SqlClient Data Provider) ***
        ----- END OF TRANSACTION LOG BACKUP -----

    Any ideas?
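
    One quick way to narrow this down is to check whether the box running the backup job can even reach the SQL Server instance under the same name that appears in the error. The following is a minimal diagnostic sketch, not part of the original question; the server name is taken from the error message above and would be run under the same domain account the agent job uses.

        using System;
        using System.Data.SqlClient;

        class ConnectivityCheck
        {
            static void Main()
            {
                // Use the exact server name from the error message; if name
                // resolution or the security group is the problem, this fails
                // the same way the log shipping job does.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "IP-0A7653F2",   // primary server as named in the error
                    IntegratedSecurity = true,    // same domain account as the agent job
                    ConnectTimeout = 10
                };

                try
                {
                    using (var conn = new SqlConnection(builder.ConnectionString))
                    {
                        conn.Open();
                        Console.WriteLine("Connected to " + conn.DataSource);
                    }
                }
                catch (SqlException ex)
                {
                    // Error 40 here usually points at name resolution, firewall or
                    // EC2 security group rules, or Named Pipes/TCP being disabled.
                    Console.WriteLine("Connection failed: " + ex.Message);
                }
            }
        }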

    Read the article

  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Already 10 days have passed since SQL Bits X in London. I really enjoyed it! These kinds of events are great not only for the content but also to meet friends whom, due to distance, it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, MVPs and so on all in one place, drinking beers and whisky and having fun: a perfect mixture for a great learning and sharing experience! I also enjoyed a lot delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees, but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I've also already had feedback from several attendees who applied the explained technique to their existing solution with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you :) )

    Read the article

  • Internet (Flash) video without a PC

    - by Rob Allen
    I am looking to retire my HTPC. So much of what we do with it can be done with one of our video game consoles or an AppleTV that it seems like a waste of space, power and time to maintain. The trouble is that my wife does streaming yoga classes served up via specific websites. I am assuming they are Flash-based, and so far I have been unable to find apps for these content providers. My question is: is there a GOOD way to handle Flash-based or even HTML5/H.264 web content with one of the other Internet-enabled devices in our stack? So far we have: Nintendo Wii, PlayStation 3, Xbox 360. And we're looking to purchase a current-generation AppleTV. Update: the sites are YogisAnonymmous.com and YogaJournal.com, both are confirmed as Flash.

    Read the article

  • Firefox: This connection is untrusted + Behind corporate firewall

    - by espais
    I've seen some similar issues strewn throughout Google's results about this, but none seem to be corporate-specific. I continually get the 'This connection is untrusted' screen every time I attempt to log into a secure site...for instance Gmail. This is pretty annoying as sometimes I have to go through the process of adding the exception two or three times before it finally lets me into Gmail. I am behind a corporate firewall, going through an internal proxy server to get to the Internet, so there is no possibility for me to update the firewall...etc. Does anybody know a way around this? Can it simply be disabled (and is that safe)? EDIT I'm going to reopen this question with a bit of new information. I have been using Google Chrome lately until today, and one thing that I noticed was that I never had this issue when using either Chrome or Internet Explorer. Is there something that these other browsers do that I need to manually do in FF?

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
          year  cv.count
        1 2000 0.3984848
        2 2001 0.6062178
        3 2002 0.2309401
        4 2003 0.5773503
        5 2004 0.3069680
        6 2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

        R> head(res)
          year mean.count
        1 2000   7.666667
        2 2001  13.333333
        3 2002  15.000000
        4 2003   3.000000
        5 2004  12.333333
        6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

        > head(res)
          year count total.count
        1 2000     5          23
        2 2000     7          23
        3 2000    11          23
        4 2001    18          40
        5 2001     4          40
        6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

        R> head(res)
          year count        mu    sigma        cv
        1 2000     5  7.666667 3.055050 0.3984848
        2 2000     7  7.666667 3.055050 0.3984848
        3 2000    11  7.666667 3.055050 0.3984848
        4 2001    18 13.333333 8.082904 0.6062178
        5 2001     4 13.333333 8.082904 0.6062178
        6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000)   # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
        res <- ore.sort(X, by=c("year","team"))

        R> head(res)
          year team homeruns
        1 2001  ANA        4
        2 2001  ARI      155
        3 2001  ATL       63
        4 2001  BAL       58
        5 2001  BOS       77
        6 2001  CHA       63

    Our next example is derived from the ggplot function documentation. This illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database and invoke it on the database server. The graph will be returned to the client window, as depicted below.
        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))

        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))

        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1, nrow(df)), rep(2, nrow(df)), rep(3, nrow(df)))
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[, 1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel=TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized in ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • How can I selectively increase latency? E.g. throttle games

    - by Arcymag
    Basically, I want networked games to run poorly on a network, but I want everything else to run smoothly. I would also appreciate advice on blocking games in general. As far as I can tell, there are a few ways to completely prevent an internet game from running:
    - Blocking it entirely via DNS configuration (e.g. the hosts file) or router DNS configuration
    - Blocking it entirely via a separate DNS server
    - Blocking the application, by uninstalling it or through some kind of access control
    - Blocking the application by automatically killing the process every once in a while
    - Blocking the application by corrupting its files periodically
    However, I would like a more subtle way to block a program. Something that either:
    - Increases latency (would this be doable through some kind of QoS, like what DD-WRT offers?)
    - Increases latency by using a special routing configuration for specific target IPs
    - Throttles other system resources, such as memory, IO, or CPU
    - Screws around with keyboard configurations when a game is launched
    I would like this to work on Mac OS X and Windows, but Linux would be great too. FYI, I don't have a kid, but I was brainstorming with some friends and parents.

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e. is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review, is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On one hand, it gives the team some more metrics and possibly will indicate more specific areas where developers may need to be trained (at least that seems to be the argument). Are there other benefits? On the other hand, and this is my concern, it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent on arguing whether something is a logic error or whether it should be classified as a crash.

    Read the article

  • Calculate data transferred in a local LAN

    - by ramdaz
    How do you calculate the data flowing between a computer and the gateway computer? I have a Linux router/gateway running iptables which routes internet traffic in a LAN. I have individual users with IP/MAC address mappings who access the Internet through the gateway computer. I would like to find out the traffic used by individual users. Is it possible for us to find out what kind of traffic was HTTP, SMTP, FTP etc.? Is it also possible to pool the information on an hourly basis and get specific info so that I can store the information in a database? I have heard of IP accounting; is that the right way?

    Read the article

  • Amazon: how does their remarkable search work?

    - by JonH
    We are working on a fairly large CRM system/knowledge management system in ASP.NET. The DB is SQL Server and is growing in size based on all the various relationships. Upper management keeps asking us to implement search much like Amazon does. Right from their search you can specify that you want to search certain categories of objects, like outdoor equipment, clothing, etc., and you can even select "all". I keep mentioning to upper management that we need to define the various fields to search on. Their response is "all fields"; they probably look at the search and assume that it is so simple. I'm the guy who has to say, hold on guys, we are talking about Amazon here. My question is: how can Amazon run a search on an "all" category? Also, one of the things management here likes is the dynamic filters. For instance, searching "robot" brings up filters specific to a robot toy. How can I put management in check and at least come up with search functionality that works like Amazon's? We are using ASP.NET, SQL Server 2008 and jQuery.
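
    One common building block for this kind of "search everything" box on SQL Server 2008 is a full-text index queried across all indexed columns, with the ranked matches joined back to the source rows. The sketch below is only an illustration, not the poster's code: it assumes a full-text index already exists on a hypothetical Products table, and the connection string is supplied elsewhere.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        public static class CatalogSearch
        {
            // Returns product names ranked by full-text relevance.
            // CONTAINSTABLE(Products, *, @term) searches every column that is part of
            // the table's full-text index, which is one way to offer an "all" category
            // without enumerating fields in application code.
            public static List<string> Search(string connectionString, string term)
            {
                const string sql = @"
                    SELECT TOP 50 p.Name
                    FROM CONTAINSTABLE(Products, *, @term) ft
                    JOIN Products p ON p.ProductId = ft.[KEY]
                    ORDER BY ft.[RANK] DESC;";

                var results = new List<string>();
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    // FORMSOF(INFLECTIONAL, ...) matches word variations (robot/robots).
                    cmd.Parameters.AddWithValue("@term",
                        "FORMSOF(INFLECTIONAL, \"" + term.Replace("\"", "") + "\")");
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            results.Add(reader.GetString(0));
                    }
                }
                return results;
            }
        }

    The Amazon-style dynamic filters are usually a separate facet/aggregation step run over the candidate result set, rather than something the text query itself produces.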

    Read the article

  • How do I run AWS code on an EC2 instance?

    - by Marianna
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolbox. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is, what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. On Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export it to a jar and upload that? Do I need to install anything special to actually run it?

    Read the article

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I'm really wanting to improve how I architect the software I code. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all it's doing is making it harder for me to settle on an architecture, because I can never tell if my design is the one that a more experienced programmer would've chosen. So I have these requirements: I should connect to one vendor and download form submissions from their API; we'll call them CompanyA. I should then map those submissions to a schema fit for submitting to another vendor for integration with the email service provider; we'll call them CompanyB. I should then submit those responses to the ESP (CompanyB) and then instruct the ESP to send that submitter an email. So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services: the service that downloads data from CompanyA, which I called the CompanyAIntegrator, and the service that submits the data to CompanyB, which I called the CompanyBIntegrator. So my questions are these: Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future. Are my naming conventions accurate and meaningful to you (who know nothing specific of the project)? Now that I have these services, where should I do the work of taking output from the CompanyAIntegrator and getting it in the format for input to the CompanyBIntegrator? Is this OK to be done in main()? Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies. Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
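
    For what it's worth, one way the separation described above is often expressed in C# is with a small interface per integrator and a thin orchestration step that owns the mapping. This is only a hedged sketch using the names from the question; the Submission and EspContact types, their fields, and the fake implementations are hypothetical placeholders, not the poster's code.

        using System;
        using System.Collections.Generic;

        // Data returned by CompanyA's API (fields are placeholders).
        public class Submission
        {
            public string Email { get; set; }
            public string Name { get; set; }
        }

        // Data in the shape CompanyB (the ESP) expects (fields are placeholders).
        public class EspContact
        {
            public string EmailAddress { get; set; }
            public string FullName { get; set; }
        }

        // Facade over CompanyA: the rest of the app only sees this interface.
        public interface ICompanyAIntegrator
        {
            IEnumerable<Submission> DownloadSubmissions();
        }

        // Facade over CompanyB.
        public interface ICompanyBIntegrator
        {
            void Submit(EspContact contact);
            void SendEmail(EspContact contact);
        }

        // The mapping gets its own class, so neither integrator knows about the other.
        public class SubmissionMapper
        {
            public EspContact Map(Submission s)
            {
                return new EspContact { EmailAddress = s.Email, FullName = s.Name };
            }
        }

        // Stub implementations so the sketch runs; real ones would call the vendors' APIs.
        public class FakeCompanyAIntegrator : ICompanyAIntegrator
        {
            public IEnumerable<Submission> DownloadSubmissions()
            {
                yield return new Submission { Email = "someone@example.com", Name = "Someone" };
            }
        }

        public class FakeCompanyBIntegrator : ICompanyBIntegrator
        {
            public void Submit(EspContact c) { Console.WriteLine("Submitted " + c.EmailAddress); }
            public void SendEmail(EspContact c) { Console.WriteLine("Emailed " + c.EmailAddress); }
        }

        public static class Program
        {
            public static void Main()
            {
                // main() only wires the pieces together; each step is testable in isolation.
                ICompanyAIntegrator source = new FakeCompanyAIntegrator();
                ICompanyBIntegrator target = new FakeCompanyBIntegrator();
                var mapper = new SubmissionMapper();

                foreach (Submission s in source.DownloadSubmissions())
                {
                    EspContact contact = mapper.Map(s);
                    target.Submit(contact);
                    target.SendEmail(contact);
                }
            }
        }

    With this shape, "doing the work in main()" really means main() only chooses the concrete integrators and the mapper; the transformation itself has a home of its own and can be unit tested without either vendor.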

    Read the article

  • How to structure an application that combines WCF and WPF

    - by CiaranG
    I'm in the process of learning how to use WCF (Windows Communication Foundation) to allow a client/server desktop application to communicate. The application's UI will be implemented using WPF, and we will probably use SQL Server for our database. What I'm struggling with, is understanding how to structure such an application. From what I've read, there are three components of a WCF application (which in the examples I've seen have existed as separate projects): A WCF service A WCF service host A WCF service client My question then, is - should these projects solely implement the functionality of sending/receiving data from the client/server? Would it make better sense this way? Would it make sense to create a separate WPF (Windows Presentation Foundation) project to implement the UI for the application? And so, when I need to send/receive data from the client/server, I could simply invoke the operations provided in the WCF projects that I have created? For anyone who has built similar applications previously, perhaps you could explain what worked best for you in terms of structuring your application? For example, if I create a user registration page. When the user clicks the 'Register' button, the client application will need to send the data to the server. In this case, could I just invoke the methods provided in the WCF projects to send the data? Also, what data structures worked best for you when sending/receiving data? My initial thought is sending/receiving XML containing the data. Is this an option that is easy to implement? I realise that answers to this question may well be a matter of opinion - unless there are specific best practices that I'm not aware of. Thank you
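
    As a rough illustration of how the three WCF pieces and a WPF client usually relate, here is a minimal, hedged sketch; the service name, binding and address are made-up examples rather than a recommendation, and error handling is omitted. The WPF project would reference only the contract and call the proxy from its view-model or code-behind.

        using System;
        using System.ServiceModel;

        // 1. The service contract, shared between server and client (often its own assembly).
        [ServiceContract]
        public interface IRegistrationService
        {
            [OperationContract]
            bool RegisterUser(string userName, string email);
        }

        // 2. The service implementation, run by the WCF service host.
        public class RegistrationService : IRegistrationService
        {
            public bool RegisterUser(string userName, string email)
            {
                // Validate and persist to SQL Server here.
                return true;
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                // The host process (console app, Windows service, or IIS) exposes the endpoint.
                var baseAddress = new Uri("http://localhost:8080/registration");
                using (var host = new ServiceHost(typeof(RegistrationService), baseAddress))
                {
                    host.AddServiceEndpoint(typeof(IRegistrationService),
                                            new BasicHttpBinding(), string.Empty);
                    host.Open();

                    // 3. The client side: the WPF app calls the service through a channel,
                    //    so no XML is hand-rolled; WCF serializes the parameters itself.
                    var factory = new ChannelFactory<IRegistrationService>(
                        new BasicHttpBinding(), new EndpointAddress(baseAddress));
                    IRegistrationService proxy = factory.CreateChannel();
                    bool ok = proxy.RegisterUser("jane", "jane@example.com");
                    Console.WriteLine("Registered: " + ok);

                    ((IClientChannel)proxy).Close();
                    factory.Close();
                }
            }
        }

    The point of the sketch is the division of labour: the contract describes what can be called, the host makes it reachable, and the WPF project just invokes the operations when the Register button is clicked; whether the payload travels as XML or binary is a binding detail rather than something the UI code builds by hand.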

    Read the article

  • Good ruby book with exercises? [closed]

    - by watabou
    I find that I learn best with a book that has a number of exercises at the end of each chapter. Great examples of this are C++ Primer Plus by Stephen Prata, Scientific Programming with Python, or the Horstmann Java books. All of those books have a number of programming exercises at the end, tailored to that specific chapter. I love the style of those books and was wondering if there is anything similar for Ruby. I've extensively searched Google for this, and people have been suggesting websites like Ruby Koans and LRTHW, but honestly, I've tried those and they aren't for me. I taught myself Python with the Hard Way book and, to be honest, it's not for me. Now, forgive me if I'm blunt, but does anyone have a Ruby programming BOOK (i.e. not a website) that has EXERCISES in it? I do NOT want a website, unless the book is only available online or is freely available online from the author, similar to the Hard Way books. I would say that I'm an intermediate-level programmer with only some Ruby experience, but if you know of a beginner book on Ruby, that is fine too. Thanks in advance, I would really appreciate the help.

    Read the article

  • Dual Screens with Widescreen monitors?

    - by nmuntz
    I want to build a new computer and purchase new monitor(s). At my old job I had two 20" 4:3 monitors and I absolutely loved this setup. However, the stores in my country only seem to have widescreen monitors nowadays, and the only 4:3 LCDs I have been able to find are 17". My questions are: Do widescreens suck when used as dual monitors? Can anyone with this setup comment on their experience with having multiple widescreen monitors? Would it be better to get three 17" 4:3 LCDs instead of two widescreens? If I go with widescreens, should I go with the smallest ones I can find? Purchasing a single big widescreen monitor is not an option for me, since being able to maximize an app on a specific area of the screen is a must-have for me and I'm not willing to use hacky apps that do a crappy job for this purpose. Thanks in advance for your advice.

    Read the article

  • MS Exchange -- running code against outbound email

    - by user32680
    I would like to know whether, using MS Exchange, there is a way to run code against outbound emails. The code would need to trigger on emails sent to a specific domain, connect to a database, look up an address related to the one the message was sent to, and carbon-copy the message to that related address. What I'm trying to do: when [email protected] gets an email, his auditor [email protected] gets CC'd. Jack is in a MS SQL DB table related to his auditor's email. Are there any samples of things like this being done?
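
    In Exchange the usual attachment point for this is a custom transport agent (or, for simpler cases, a transport rule); the database half is plain ADO.NET. The sketch below covers only that lookup half and is purely illustrative: the Auditors table, its columns, and the connection string are hypothetical, and the returned address would then be added as an extra recipient from whatever transport hook is chosen.

        using System;
        using System.Data.SqlClient;

        public static class AuditorLookup
        {
            // Given the address a message was sent to, return the auditor address
            // that should be copied, or null if no mapping exists.
            // Table and column names are made up for the example.
            public static string FindAuditor(string connectionString, string recipientAddress)
            {
                const string sql =
                    "SELECT AuditorEmail FROM Auditors WHERE EmployeeEmail = @recipient";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@recipient", recipientAddress);
                    conn.Open();
                    return cmd.ExecuteScalar() as string;   // null when there is no match
                }
            }
        }

    A transport agent would call something like this from its routing event, filter on the recipient's domain first, and then add the auditor as an additional envelope recipient so the copy happens server-side, before delivery.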

    Read the article

  • Dapper and object validation/business rules enforcement

    - by Eugene
    This isn't really Dapper-specific, actually, as it relates to any XML-serializable object, but it came up when I was storing an object using Dapper. Anyways, say I have a user class. Normally, I'd do something like this:

        class User
        {
            public string SIN { get; private set; }
            public string DisplayName { get; set; }

            public User(string sin)
            {
                if (string.IsNullOrWhiteSpace(sin))
                    throw new ArgumentException("SIN must be specified");
                this.SIN = sin;
            }
        }

    Since a SIN is required, I'd just create a constructor with a sin parameter and make the property read-only. However, with Dapper (and probably any other ORM), I need to provide a parameterless constructor and make all properties writeable. So now I have this:

        class User : IValidatableObject
        {
            public int Id { get; set; }
            public string SIN { get; set; }
            public string DisplayName { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // implementation
            }
        }

    This seems... I can't really pick the word... a bad smell? A) I'm allowing properties to be changed that should never change after the object has been created (SIN, user id). B) Now I have to implement IValidatableObject or something like that to test those properties before writing them to the DB. So how do you go about it?
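
    For the IValidatableObject route specifically, the Validate method from the second class above can at least centralize the invariant checks, even if it cannot restore true immutability. A possible implementation is sketched below; it only encodes the rules mentioned in the question and is not the poster's actual code.

        using System;
        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        class User : IValidatableObject
        {
            public int Id { get; set; }
            public string SIN { get; set; }
            public string DisplayName { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // The same rule the original constructor enforced.
                if (string.IsNullOrWhiteSpace(SIN))
                    yield return new ValidationResult("SIN must be specified", new[] { "SIN" });

                // The database identity should never be negative.
                if (Id < 0)
                    yield return new ValidationResult("Id cannot be negative", new[] { "Id" });
            }
        }

    Depending on the Dapper version, keeping the setters private may also be worth testing, since that keeps the compile-time surface closer to the original design while still allowing materialization.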

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site.  The client wanted the site updated to reflect the new look and feel that is being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here.  It worked well, except for a few pages on a subsite.  On those pages, I got the following error: Server Error in '/' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Code blocks are not allowed in this file.   I decided to go comb through my new master page and compare it to the existing master page that was already working.  After going through them line by line several times, I had no clue what would be causing the error because they were basically the same! It turns out, it was a combination of two things.  First, on a few of the pages in the site, there was some include code (basically an <% EVAL()%> snippet).  This was the code that was triggering my error “Code blocks are not allowed in this file”. However, this code was working fine with the previous master page. I decided to then try doing a full deployment of the site with the new master page, and it worked fine!  Apparently, if the master page is deployed using a Feature, then it is granted permission to allow code blocks, but if you upload pages either using web UI or SharePoint Designer, then the pages won’t be able to use code blocks. I haven’t been able to pin down the rules or official info about this, but I thought others might find it useful anyway.

    Read the article

  • Win 7 BSOD stuck at dumping

    - by AFO
    Recently I've been getting frequent BSODs, which always get stuck at dumping without finishing, and most of them show no error messages. This never happened before I upgraded to new RAM, but two passes of Memtest86 turned out fine. I tried reinstalling Windows; the problem is still there. I tried initiating a manual crash, and that did create a successful dump. The SSD Windows 7 is on doesn't seem to have any problems; CrystalDisk says it's healthy. I varied the CPU multiplier between stock and +500 MHz; it crashed regardless. I left voltage control on auto. I'm fairly sure it's a hardware problem, I just can't pinpoint which specific part(s). The specs:
    - Windows 7 x64 (1 day old)
    - 955 X4 BE C3 (running at stock) (3.5 years old)
    - GA-970A-D3 (1.5 years old)
    - Gigabyte 6950, unlocked to 6970 (still at 6950 speeds) (<3 years old)
    - 2x4GB 1600 CL9 HyperX Blu (running at 11-11-11, default motherboard setting) (<1 month old)
    - Plextor M5s (around 5 months)

    Read the article

  • PARTNER WEBCASTS: EMEA ALLIANCES AND CHANNELS HARDWARE WEBINARS, JULY 2012

    - by mseika
    PARTNER WEBCASTS: EMEA ALLIANCES AND CHANNELS HARDWARE WEBINARS, JULY 2012. Dear partner, Oracle is pleased to invite you to the following webcasts dedicated to our EMEA partner community and designed to provide you with important news on our SPARC and Storage product portfolios. Please ensure you don't miss these unique learning opportunities! 1. How to Make Money Selling SPARC! 3pm CET (2pm UKT), Tuesday, July 10, 2012. The webcast will be hosted by Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant. Agenda: to bring our partners timely, valuable information focused on increasing their success in selling SPARC systems. The webcast will be focused and targeted on specific topics and will last approximately 30 minutes. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. REGISTER NOW. 2. Introduction to Oracle's New StorageTek SL150 Modular Tape Library. 3pm CET (2pm UK), Thursday, July 12, 2012. This webcast will help you to understand Oracle's new StorageTek SL150 modular tape library, which is the first scalable tape library designed for small and midsized companies that are experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points and a competitive overview of StorageTek. You can submit your questions via WebEx chat and there will be a live Q&A session at the end of the webcast. REGISTER NOW. Delivery Format: this FREE online LIVE eSeminar will be delivered over the Web and Conference Call. Note: please join the call 10 minutes before the scheduled start time. We look forward to your participation. Best regards, Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle Hardware Sales; Sasan Moaveni, EMEA Storage Sales Manager, Oracle Hardware Sales

    Read the article
