Search Results

Search found 17610 results on 705 pages for 'specific'.


  • How to zero out a drive?

    - by Mohd Arafat Hossain
    I'm currently running Ubuntu 12.04 and want to completely format my laptop so I can install Windows 7 (replacing Ubuntu). When I put in my bootable USB with the OS, it just shows a black screen with a white blinking cursor in the top left corner. I've waited for hours (five whole hours, to be specific) for something to happen, but I get nothing, so I pull out the USB and Ubuntu 12.04 loads up. I've repeated this several times with every USB-related option at the top of the boot priority list, but the result is the same. Sites like http://www.techspot.com/community/topics/cant-replace-ubuntu-with-windows.175716/ say I have to zero out my hard drive. My question is: how? Note: erasing all my data is no problem, because I have nothing important to back up. If I have to lose my primary OS (Ubuntu 12.04) in the process, I'm ready to, as my aim is just to install Windows 7 successfully. Please don't answer that there is something wrong with my USB drive, my USB reader/port/hardware, or the contents of the USB, because they all work fine on my brother's PC, where it boots flawlessly.
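    One way to do this, sketched under the assumption that the laptop's disk is /dev/sda (check with lsblk first; this permanently destroys everything on the disk): boot the Ubuntu live session from the USB or a CD and overwrite the drive with dd.

        # Identify the target disk first -- zeroing is irreversible.
        sudo lsblk

        # Zeroing just the first few MB wipes the partition table and any
        # stale boot sector, which is usually all the installer needs:
        sudo dd if=/dev/zero of=/dev/sda bs=1M count=8

        # Or zero the entire drive (can take hours):
        sudo dd if=/dev/zero of=/dev/sda bs=1M

    After this, the Windows 7 installer should see the disk as blank and let you create fresh partitions.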

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count / mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count / mean.count
          data.frame(year = x$year[1], cv.count = cv)
        }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when the data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
           year  cv.count
        1  2000 0.3984848
        2  2001 0.6062178
        3  2002 0.2309401
        4  2003 0.5773503
        5  2004 0.3069680
        6  2005 0.3431743

    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year = x$year[1], mean.count = mean.count)
        }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

        R> head(res)
           year mean.count
        1  2000   7.666667
        2  2001  13.333333
        3  2002  15.000000
        4  2003   3.000000
        5  2004  12.333333
        6  2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year = x$year[1], count = x$count, total.count = total.count)
        }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

        R> head(res)
           year count total.count
        1  2000     5          23
        2  2000     7          23
        3  2000    11          23
        4  2001    18          40
        5  2001     4          40
        6  2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma / mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma / mu
          data.frame(year = x$year[1], count = x$count, mu = mu,
                     sigma = sigma, cv = cv)
        }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

        R> head(res)
           year count        mu    sigma        cv
        1  2000     5  7.666667 3.055050 0.3984848
        2  2000     7  7.666667 3.055050 0.3984848
        3  2000    11  7.666667 3.055050 0.3984848
        4  2001    18 13.333333 8.082904 0.6062178
        5  2001     4 13.333333 8.082904 0.6062178
        6  2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000)   # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
        }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1),
           parallel = FALSE)
        res <- ore.sort(X, by = c("year", "team"))

        R> head(res)
           year team homeruns
        1  2001  ANA        4
        2  2001  ARI      155
        3  2001  ATL       63
        4  2001  BAL       58
        5  2001  BOS       77
        6  2001  CHA       63

    Our next example is derived from the ggplot function documentation. It illustrates using ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window, as depicted below.

        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))
        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))  # one partition per copy of df

        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[, 1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel = TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can itself be used within an ore.groupApply call.

    Read the article

  • How can I selectively increase latency? E.g. throttle games

    - by Arcymag
    Basically, I want networked games to run poorly on my network, but I want everything else to run smoothly. I would also appreciate advice on blocking games in general. As far as I can tell, there are a few ways to completely prevent an internet game from running:
      - Blocking it entirely via DNS configuration (e.g. the hosts file) or router DNS configuration
      - Blocking it entirely via a separate DNS server
      - Blocking the application, by uninstalling it or via some kind of access control
      - Blocking the application by automatically killing the process every once in a while
      - Blocking the application by corrupting its files periodically
    However, I would like a more subtle way to block a program. Something that either:
      - Increases latency (would this be doable through some kind of QoS, like what DD-WRT offers?)
      - Increases latency by using a special routing configuration for specific target IPs
      - Throttles other system resources, such as memory, IO, or CPU
      - Messes with keyboard configuration when a game is launched
    I would like this to work on Mac OS X and Windows, but Linux would be great too. FYI, I don't have a kid; I was brainstorming with some friends and parents.
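    On Linux (including DD-WRT-class routers, which expose tc), one way to add latency selectively is the netem queueing discipline. A minimal sketch, assuming eth0 is the interface the traffic leaves through and 203.0.113.10 stands in for a game server's address (both are placeholders); note this shapes outbound traffic only:

        # Priority qdisc: bands 1:1 and 1:2 stay untouched
        tc qdisc add dev eth0 root handle 1: prio
        # Add 500ms of delay with 100ms jitter to band 3 only
        tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 500ms 100ms
        # Steer packets for the game server into the delayed band
        tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
            match ip dst 203.0.113.10/32 flowid 1:3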

    Read the article

  • High-Level Application Architecture Question

    - by Jesse Bunch
    So I'm really wanting to improve how I architect the software I code. I want to focus on maintainability and clean code. As you might guess, I've been reading a lot of resources on this topic, and all it's doing is making it harder for me to settle on an architecture, because I can never tell if my design is the one a more experienced programmer would have chosen. So I have these requirements:
      - I should connect to one vendor and download form submissions from their API. We'll call them CompanyA.
      - I should then map those submissions to a schema fit for submission to another vendor, for integration with the email service provider (ESP). We'll call them CompanyB.
      - I should then submit those responses to the ESP (CompanyB) and instruct the ESP to send the submitter an email.
    So basically, I'm copying data from one web service to another and then performing an action at the latter web service. I've identified a couple of high-level services:
      - The service that downloads data from CompanyA. I called this the CompanyAIntegrator.
      - The service that submits the data to CompanyB. I called this the CompanyBIntegrator.
    So my questions are these:
      1. Is this a good design? I've tried to separate the concerns and am planning to use the facade pattern to make the integrators interchangeable if the vendors change in the future.
      2. Are my naming conventions accurate and meaningful to you (who know nothing specific of the project)?
      3. Now that I have these services, where should I do the work of taking output from the CompanyAIntegrator and getting it into the format for input to the CompanyBIntegrator? Is this OK to be done in main()?
      4. Do you have any general pointers on how you'd code something like this? I imagine this scenario is common to us engineers, especially those working in agencies.
    Thanks for any help you can give. Learning how to architect well is really mind-cluttering.
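    For what it's worth, here is one hedged sketch of the shape described above: the two integrators sit behind small interfaces, and the mapping lives in a pipeline class rather than in main(). All type and member names are hypothetical, invented for illustration:

        using System.Collections.Generic;

        public class FormSubmission { public string Email; public string Name; }
        public class EspContact { public string Email; public string Name; }

        public interface ISubmissionSource            // e.g. CompanyAIntegrator
        {
            IEnumerable<FormSubmission> DownloadSubmissions();
        }

        public interface ISubmissionSink              // e.g. CompanyBIntegrator
        {
            void Submit(EspContact contact);
            void TriggerEmail(EspContact contact);
        }

        // Owning the mapping here keeps main() down to pure wiring, and either
        // vendor can be swapped behind its interface without touching the other.
        public class SubmissionPipeline
        {
            private readonly ISubmissionSource source;
            private readonly ISubmissionSink sink;

            public SubmissionPipeline(ISubmissionSource source, ISubmissionSink sink)
            {
                this.source = source;
                this.sink = sink;
            }

            public void Run()
            {
                foreach (var submission in source.DownloadSubmissions())
                {
                    var contact = Map(submission);
                    sink.Submit(contact);
                    sink.TriggerEmail(contact);
                }
            }

            private static EspContact Map(FormSubmission s)
            {
                return new EspContact { Email = s.Email, Name = s.Name };
            }
        }

    With that split, main() just constructs the two integrators and calls new SubmissionPipeline(a, b).Run(), which answers the "is main() OK?" question with "only for wiring".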

    Read the article

  • How does Amazon's remarkable search work?

    - by JonH
    We are working on a fairly large CRM / knowledge management system in ASP.NET. The DB is SQL Server and is growing in size given all the various relationships. Upper management keeps asking us to implement search much like Amazon does. Right from their search box you can choose to search certain categories of objects, like outdoor equipment, clothing, etc., and you can even select "all". I keep mentioning to upper management that we need to define the various fields to search on. Their response is "all fields"; they probably look at the search and assume it is simple. I'm the guy who has to say, "Hold on, guys, we're talking about Amazon here." My question is: how can Amazon run a search on an "all" category? Also, one of the things management here likes is the dynamic filters. For instance, searching "robot" brings up filters specific to a robot toy. How can I put management in check and at least come up with search functionality that works like Amazon's? We are using ASP.NET, SQL Server 2008, and jQuery.
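    As a starting point rather than a full answer: SQL Server 2008's full-text search can approximate the "all" category by indexing every searchable column and querying them together. A hedged sketch, assuming a Products table with a full-text index already defined on its text columns (all names here are made up):

        -- Rank-ordered matches across all full-text indexed columns
        SELECT p.ProductId, p.Name, ft.[RANK]
        FROM CONTAINSTABLE(Products, *, 'robot') AS ft
        JOIN Products AS p ON p.ProductId = ft.[KEY]
        ORDER BY ft.[RANK] DESC;

    The Amazon-style dynamic filters are usually a separate faceting step: for the current result set, count matches per category or attribute and render the non-empty ones, rather than searching extra fields.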

    Read the article

  • Good Ruby book with exercises? [closed]

    - by watabou
    I find that I learn best with a book that has a number of exercises at the end of each chapter. Great examples of this are C++ Primer Plus by Stephen Prata, Scientific Programming with Python, and the Horstmann Java books. All of those books have programming exercises at the end, tailored to that specific chapter. I love the style of those books and was wondering if there is anything similar for Ruby. I've searched Google extensively for this, and people keep suggesting websites like Ruby Koans and LRTHW, but honestly, I've tried those and they aren't for me. I taught myself Python with the Hard Way book and, to be honest, it's not for me either. Now, forgive me if I'm blunt, but does anyone know of a Ruby programming BOOK (i.e. not a website) that has EXERCISES in it? I do NOT want a website, unless the book is only available online, or is freely available online from the author, similar to the Hard Way books. I would say that I'm an intermediate-level programmer with only some Ruby experience, but if you know of a beginner book on Ruby, that is fine too. Thanks in advance; I would really appreciate the help.

    Read the article

  • What are the benefits/drawbacks of classifying defects during a peer code review?

    - by DXM
    About 3 months ago, our engineering group rolled out Review Board to be used for all peer code reviews. Today, I had a discussion with one of the people involved in that process and found out that we are already looking for a replacement (possibly something commercial) because of several missing features. One of the features apparently asked for by many people is the ability to classify/categorize each code review comment (i.e., is it a style issue, coding convention, resource leak, logic error, crash... whatever). For those teams that regularly practice code review: is this categorization a common practice? Do you do it? Have you done it in the past? Is it good/bad? On one hand, it gives the team more metrics and possibly indicates more specific areas in which developers may need training (at least that seems to be the argument). Are there other benefits? On the other hand, my concern is that it will slow down the code review process that much more. As a team lead, I've done a fairly large share of reviews, and I've always liked the ability to highlight a chunk of code, hammer off a comment, and move on as fast as possible. Although I haven't tried it personally, I have a feeling that expanding that combo box every time and scrolling/searching for the right category would feel like something tripping you up. Also, if we start keeping metrics on this stuff, my other concern is that valuable code review meeting time will be spent arguing whether something is a logic error or should be classified as a crash.

    Read the article

  • Calculate data transferred in a local LAN

    - by ramdaz
    How do you calculate the data flowing between a computer and the gateway computer? I have a Linux router/gateway running iptables, which routes internet traffic on a LAN. I have individual users, with IP/MAC addresses mapped, who access the Internet through the gateway computer. I would like to find out the traffic used by individual users. Is it possible to find out what kind of traffic it was: HTTP, SMTP, FTP, etc.? Is it also possible to pool the information on an hourly basis and get specific info, so that I can store the information in a database? I have heard of IP accounting; is that the right way?
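    One hedged sketch of plain iptables accounting on the gateway (the IP below is a placeholder): rules without a jump target do nothing but count, and a cron job can read and zero the counters every hour before writing them to the database.

        # Count all forwarded traffic for one user, in each direction
        iptables -A FORWARD -s 192.168.1.10
        iptables -A FORWARD -d 192.168.1.10
        # Extra match-only rules split the same user's traffic by protocol
        iptables -A FORWARD -s 192.168.1.10 -p tcp --dport 80   # HTTP
        iptables -A FORWARD -s 192.168.1.10 -p tcp --dport 25   # SMTP
        iptables -A FORWARD -s 192.168.1.10 -p tcp --dport 21   # FTP control

        # Hourly (e.g. from cron): dump exact byte counts, then reset
        iptables -L FORWARD -v -x -n
        iptables -Z FORWARD

    Tools like ntop or pmacct do the same job with less scripting, if you'd rather not parse iptables output yourself.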

    Read the article

  • SharePoint 2007 Parser Error after updating master page

    - by Kelly Jones
    A few weeks ago I was updating the master page for a SharePoint 2007 (WSS) site. The client wanted the site updated to reflect the new look and feel being applied to another set of sites in the organization. I created a new theme and master page, which I already wrote about here and here. It worked well, except for a few pages on a subsite. On those pages, I got the following error:

        Server Error in '/' Application.
        Parser Error
        Description: An error occurred during the parsing of a resource required
        to service this request. Please review the following specific parse error
        details and modify your source file appropriately.
        Parser Error Message: Code blocks are not allowed in this file.

    I decided to comb through my new master page and compare it to the existing master page that was already working. After going through them line by line several times, I had no clue what could be causing the error, because they were basically the same! It turned out to be a combination of two things. First, on a few of the pages in the site, there was some include code (basically an <% EVAL() %> snippet). This was the code triggering my error, "Code blocks are not allowed in this file." However, this code had worked fine with the previous master page. I then tried doing a full deployment of the site with the new master page, and it worked fine! Apparently, if the master page is deployed using a Feature, then it is granted permission to allow code blocks, but if you upload pages using either the web UI or SharePoint Designer, then the pages won't be able to use code blocks. I haven't been able to pin down the rules or official documentation on this, but I thought others might find it useful anyway.
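    A related workaround, if redeploying through a Feature isn't an option: SharePoint's SafeMode parser can be told to allow inline code for specific paths in the web application's web.config. A hedged sketch (the VirtualPath is a placeholder, and this loosens a safety restriction, so scope it as narrowly as possible):

        <!-- web.config, inside <SharePoint><SafeMode> -->
        <PageParserPaths>
          <PageParserPath VirtualPath="/SitePages/*" CompilationMode="Always"
                          AllowServerSideScript="true" IncludeSubFolders="true" />
        </PageParserPaths>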

    Read the article

  • How to structure an application that combines WCF and WPF

    - by CiaranG
    I'm in the process of learning how to use WCF (Windows Communication Foundation) to let a client/server desktop application communicate. The application's UI will be implemented using WPF, and we will probably use SQL Server for our database. What I'm struggling with is understanding how to structure such an application. From what I've read, there are three components of a WCF application (which in the examples I've seen have existed as separate projects):
      - A WCF service
      - A WCF service host
      - A WCF service client
    My question then is: should these projects solely implement the functionality of sending/receiving data between the client and server? Would it make sense to create a separate WPF (Windows Presentation Foundation) project to implement the UI for the application, so that when I need to send/receive data, I can simply invoke the operations provided in the WCF projects I have created? For anyone who has built similar applications before, could you explain what worked best for you in terms of structuring your application? For example, say I create a user registration page. When the user clicks the 'Register' button, the client application will need to send the data to the server. In this case, could I just invoke the methods provided in the WCF projects to send the data? Also, what data structures worked best for you when sending/receiving data? My initial thought is sending/receiving XML containing the data. Is this an option that is easy to implement? I realise that answers to this question may well be a matter of opinion, unless there are specific best practices I'm not aware of. Thank you.
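    A hedged sketch of the usual layering: the contract (and its data types) live in a shared class library, the host and the WPF client both reference only that contract, and the client calls the service through a ChannelFactory. All names below are invented for illustration:

        using System.Runtime.Serialization;
        using System.ServiceModel;

        // Shared contract assembly, referenced by host and WPF client alike
        [ServiceContract]
        public interface IRegistrationService
        {
            [OperationContract]
            void RegisterUser(UserRegistration registration);
        }

        [DataContract]
        public class UserRegistration
        {
            [DataMember] public string Email { get; set; }
            [DataMember] public string Name { get; set; }
        }

        // In the WPF client, e.g. in the Register button's click handler:
        // var factory = new ChannelFactory<IRegistrationService>("registrationEndpoint");
        // factory.CreateChannel().RegisterUser(new UserRegistration { Email = e, Name = n });

    On the XML question: WCF serializes [DataContract] types for you on the wire, so passing typed objects like this is usually easier and safer than hand-building XML payloads.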

    Read the article

  • Dual Screens with Widescreen monitors?

    - by nmuntz
    I want to build a new computer and purchase new monitor(s). At my old job I had two 20" 4:3 monitors, and I absolutely loved that setup. However, the stores in my country only seem to carry widescreen monitors nowadays, and the only 4:3 LCDs I have been able to find are 17". My questions:
      - Do widescreens suck when used as dual monitors?
      - Can anyone with this setup comment on their experience with multiple widescreen monitors?
      - Would it be better to get three 17" 4:3 LCDs instead of two widescreens?
      - If I go with widescreens, should I go with the smallest ones I can find?
    Purchasing a single big widescreen monitor is not an option for me, since being able to maximize an app on a specific area of the screen is a must-have, and I'm not willing to use hacky apps that do a crappy job for this purpose. Thanks in advance for your advice.

    Read the article

  • Can't connect with SQL Server Management Studio Express 2012

    - by Rare-Man
    I installed SQL Server Management Studio Express 2012, but when I try to connect from the Management Studio environment, I get this error:

        TITLE: Connect to Server
        Cannot connect to ..
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a
        connection to SQL Server. The server was not found or was not accessible.
        Verify that the instance name is correct and that SQL Server is configured
        to allow remote connections. (provider: Named Pipes Provider, error: 40 -
        Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=2&LinkId=20476
        The system cannot find the file specified
        BUTTONS: OK

    Also, during installation I didn't get an option to select a cluster. In my SQL Server Configuration Manager, the list of SQL Server services is empty. And when I tried "Remove a Failover Cluster Node", this error happened: http://oi57.tinypic.com/2lrvat.jpg

    Read the article

  • PARTNER WEBCASTS: EMEA ALLIANCES AND CHANNELS HARDWARE WEBINARS, JULY 2012

    - by mseika
    Dear partner,
    Oracle is pleased to invite you to the following webcasts, dedicated to our EMEA partner community and designed to provide you with important news on our SPARC and Storage product portfolios. Please ensure you don't miss these unique learning opportunities!
    1. How to Make Money Selling SPARC! (3pm CET / 2pm UK, Tuesday, July 10, 2012)
    The webcast will be hosted by Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant. The agenda: to bring our partners timely, valuable information focused on increasing their success in selling SPARC systems. The webcast will be focused on specific topics and will last approximately 30 minutes. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW
    2. Introduction to Oracle's New StorageTek SL150 Modular Tape Library (3pm CET / 2pm UK, Thursday, July 12, 2012)
    This webcast will help you understand Oracle's new StorageTek SL150 modular tape library, the first scalable tape library designed for small and midsized companies experiencing high growth. Built from Oracle software and StorageTek library technology, it delivers a cost-effective combination of ease of use and scalability, resulting in overall TCO savings. During the webcast, Cindy McCurley, from Tape Product Management, will introduce you to the latest addition to the Oracle tape storage product portfolio, the SL150 Modular Tape Library. This 60-minute webcast will cover the product's features, positioning, unique selling points, and a competitive overview of StorageTek. You can submit your questions via WebEx chat, and there will be a live Q&A session at the end of the webcast. REGISTER NOW
    Delivery format: this FREE online LIVE eSeminar will be delivered over the Web and a conference call. Note: please join the call 10 minutes before the scheduled start time. We look forward to your participation.
    Best regards,
    Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle Hardware Sales
    Sasan Moaveni, EMEA Storage Sales Manager, Oracle Hardware Sales

    Read the article

  • How do I run AWS code on an EC2 instance?

    - by Marianna
    I just started with Amazon Web Services, and I have an EC2 instance. I downloaded the Java SDK and the Eclipse toolkit. I am able to run a sample program locally on my PC and connect to the Amazon databases, etc. My question is: what do I need to do to get this working on my EC2 instance? This may not even be specific to AWS. In Eclipse, I can just "Run as Application" and run any code. On the server side, what do I need to do? Should I FTP over my .java files? Should I export the project to a jar and upload that? Do I need to install anything special to actually run it?
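    The jar route is the usual pattern: export a runnable jar (with the AWS SDK jars bundled or alongside on the classpath), copy it to the instance, and run it with a JRE there. A rough sketch, with placeholder file names and hostname, assuming an Amazon Linux instance:

        # On your PC: copy the exported jar up to the instance
        scp -i mykey.pem myapp.jar ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com:~/

        # On the instance: install a JRE if one isn't present, then run
        sudo yum install -y java-1.7.0-openjdk
        java -jar myapp.jar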

    Read the article

  • MS Exchange -- running code against outbound email

    - by user32680
    I would like to know whether, using MS Exchange, there is a way to run code against outbound emails. The code would need to trigger on emails sent to a specific domain, connect to a database, check for an email address related to the one the email was sent to, and carbon-copy that email to the related address. What I'm trying to do: when [email protected] gets an email, his auditor [email protected] gets CC'd. Jack is in an MSSQL DB table that relates him to his auditor's email address. Are there any samples of things like this being done?

    Read the article

  • Dapper and object validation/business rules enforcement

    - by Eugene
    This isn't really Dapper-specific, actually, as it relates to any XML-serializable object, but it came up when I was storing an object using Dapper. Anyway, say I have a user class. Normally, I'd do something like this:

        class User
        {
            public string SIN { get; private set; }
            public string DisplayName { get; set; }

            public User(string sin)
            {
                if (string.IsNullOrWhiteSpace(sin))
                    throw new ArgumentException("SIN must be specified");
                this.SIN = sin;
            }
        }

    Since a SIN is required, I'd just create a constructor with a sin parameter and make the property read-only. However, with Dapper (and probably any other ORM), I need to provide a parameterless constructor and make all properties writeable. So now I have this:

        class User : IValidatableObject
        {
            public int Id { get; set; }
            public string SIN { get; set; }
            public string DisplayName { get; set; }

            public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
            {
                // implementation
            }
        }

    This seems... can't really pick the word, a bad smell?
      a) I'm allowing properties to be changed that should never change after the object has been created (SIN, user id).
      b) Now I have to implement IValidatableObject or something like it to test those properties before updating them in the db.
    So how do you go about it?

    Read the article

  • Sharing authentication methods across API and web app

    - by Snixtor
    I want to share an authentication implementation across a web application and a web API. The web application will be ASP.NET (mostly MVC 4); the API will be mostly ASP.NET Web API, though I anticipate it will also have a few custom modules or handlers. I want to:
      - Share as much authentication implementation between the app and API as possible.
      - Have the web application behave like forms authentication (attractive log-in page, logout option, redirect to/from the login page when a request requires authentication/authorisation).
      - Have API callers use something closer to standard HTTP (401 - Unauthorized, not 302 - Redirect).
      - Provide client- and server-side logout mechanisms that don't require a change of password (so HTTP Basic is out, since clients typically cache their credentials).
    The way I'm thinking of implementing this is using plain old ASP.NET forms authentication for the web application, and pushing another module into the stack (much like MADAM - Mixed Authentication Disposition ASP.NET Module). This module will look for some HTTP header (implementation specific) which indicates "caller is API". If the "caller is API" header is set, then the service will respond differently than standard ASP.NET forms authentication. It will:
      - Return 401 instead of 302 on a request lacking authentication.
      - Look for username + password in a custom "Login" HTTP header, and return a FormsAuthentication ticket in a custom "FormsAuth" header.
      - Look for the FormsAuthentication ticket in a custom "FormsAuth" header.
    My questions are: Is there a framework for ASP.NET that already covers this scenario? Are there any glaring holes in this proposed implementation? My primary fear is a security risk that I can't see, but I'm similarly concerned that there may be something about such an implementation that will make it overly restrictive or clumsy to work with.
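    For concreteness, a hedged sketch of the disposition switch itself (the header name and login-URL check are placeholders, not a vetted implementation; a real module would also handle the custom Login/FormsAuth headers):

        using System.Web;

        public class ApiAuthDispositionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.EndRequest += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    bool callerIsApi = ctx.Request.Headers["X-Caller"] == "API";

                    // Forms auth has already turned the denial into a login
                    // redirect; hand API callers the bare 401 instead.
                    if (callerIsApi && ctx.Response.StatusCode == 302 &&
                        (ctx.Response.RedirectLocation ?? "").Contains("/login"))
                    {
                        ctx.Response.ClearHeaders();
                        ctx.Response.ClearContent();
                        ctx.Response.StatusCode = 401;
                    }
                };
            }

            public void Dispose() { }
        }

    Worth noting: .NET 4.5 later added HttpResponse.SuppressFormsAuthenticationRedirect, which solves this exact 302-vs-401 problem without string-matching the redirect URL.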

    Read the article

  • Win 7 BSOD gets stuck while dumping

    - by AFO
    Recently I've been getting frequent BSODs, which always get stuck at the memory dump without finishing it, and most of them show no error message. This never happened before I upgraded to new RAM, but two passes of Memtest86 came out fine. I tried reinstalling Windows; the problem is still there. I tried initiating a manual crash, and that did create a successful dump. The SSD that Windows 7 is on doesn't seem to have any problems; CrystalDiskInfo says it's healthy. I varied the CPU multiplier between stock and +500 MHz; it crashed regardless. Voltage control is left on auto. I'm fairly sure it's a hardware problem; I just can't pinpoint which specific part(s). The specs:
      - Windows 7 x64 (1 day old)
      - 955 X4 BE C3 (running at stock) (3.5 years old)
      - GA-970A-D3 (1.5 years old)
      - Gigabyte 6950, unlocked to 6970 (still at 6950 speeds) (<3 years old)
      - 2x4GB 1600 CL9 HyperX Blu (running at 11-11-11, the default motherboard setting) (<1 month old)
      - Plextor M5S (around 5 months old)

    Read the article

  • Redirecting wildcard emails to one email with postfix

    - by Burning the Codeigniter
    I'm creating a bounce email system where users can reply by email to messages on my site. When an email containing the previous message is sent to the user, the Reply-To field contains an address something like notification-message-<ID>@mysite.com (with the message ID at the end). If the user replies, the reply goes back to notification-message-<ID>@mysite.com, which of course doesn't have its own mailbox; only notification@mysite.com does. How would I redirect all incoming messages matching the wildcard notification-message-*@mysite.com to notification@mysite.com? I did some research, but nothing solid worked, including luser_relay = notification@mysite.com and putting notification-message-* in the Postfix aliases table. The notification@ account has a Maildir, so the emails would go into it. I am using Ubuntu 11.04.
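    One approach built for exactly this pattern is a regular-expression virtual alias map; a hedged sketch (the map file path is arbitrary, and on Ubuntu the postfix-pcre package must be installed):

        # /etc/postfix/virtual_regexp -- a pcre lookup table
        /^notification-message-.+@mysite\.com$/    notification@mysite.com

        # /etc/postfix/main.cf
        virtual_alias_maps = pcre:/etc/postfix/virtual_regexp

    pcre tables are read directly, so no postmap step is needed; a postfix reload should pick up the change.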

    Read the article

  • Practical considerations for HTML / CSS naming conventions (syntax)

    - by Jeroen
    Question: what are the practical considerations for the syntax of class and id values? Note that I'm not asking about the semantics, i.e. the actual words being used, as described for example in this blog post. There are already a lot of resources on that side of naming conventions; in fact they obscure my search for practical information on the various syntactical bits: casing, use of punctuation (specifically the - dash), specific characters to use or avoid, etc. To sum up the reasons I'm asking this question:
      - The naming restrictions on id and class don't naturally lead to any convention
      - The abundance of resources on the semantic side of naming conventions obscures searches on the syntactic considerations
      - I couldn't find any authoritative source on this
      - There wasn't any question on SE Programmers on this topic yet :)
    Some of the conventions I've considered using:
      - UpperCamelCase, mainly as a crossover habit from server-side coding
      - lowerCamelCase, for consistency with JavaScript naming conventions
      - css-style-classes, which is consistent with the naming of CSS properties (but can be annoying when selecting text with Ctrl+Shift+ArrowKey)
      - with_under_scores, which I personally haven't seen used much
      - alllowercase, simple to remember but hard to read for longer names
      - UPPERCASEFTW, as a great way to annoy your fellow programmers (perhaps combined with option 4 for readability)
    And probably I've left out some important options or combinations as well. So: what considerations are there for naming conventions, and to which convention do they lead?

    Read the article

  • Teaching myself, as a physicist, to become a better programmer

    - by user787267
    I've always liked physics, and I've always liked coding, so when I got the offer for a PhD position doing numerical physics (the details are not relevant; it's mostly parallel programming for a cluster) at a university, it was a no-brainer for me. However, like most physicists, I'm self-taught. I don't have broad background knowledge about how to code in an object-oriented way, or the name of that specific algorithm that optimizes the search in some k-d tree. Since all my work so far has been concerned more with the physics and the scientific results, I undoubtedly have some bad habits, all the more because my coding is my own and not really teamwork. I have mostly used C, since it is very straightforward and "what you write is what you get": no need for fancy abstractions. However, I have recently switched to C++, since I'd like to learn more about the power that comes with abstraction, and it's pretty C-like (syntax-wise at least). How do I teach myself to code in a good, abstract way, like a graduate in computer science would? I know my code is efficient, but I want it to be elegant as well, and readable. Keep in mind that I don't have time to read several 1000-page tomes about abstract programming. I need to spend time on actual, physics-related research (my supervisor would laugh at me if he knew I spent time thinking about how to program elegantly). How do I assess whether my work is also good from a programmer's perspective?

    Read the article

  • Windows Phone 7 Prototype 002: Animated Page Transitions + Writeable Bitmaps

    Motion is a key part of WP7 application development. Without motion, the WP7 UI is just a bunch of text; not nearly as exciting. To delight users, you can add transitions between pages. The sample app includes some storyboards to animate between two pages. Other people have noted that you can just use the TransitioningContentControl from the Silverlight Toolkit. Peter Torr also had a nice animating frame control in his MIX demo code (his blog has some other great code samples for WP7 app dev). I took some of those concepts and the code from the TransitioningContentControl to make a new animating frame control. In this prototype, the frame takes a snapshot of the old content and the new content using WriteableBitmaps, animates the snapshots, and then replaces them with the actual page. The benefit is smoother animation on pages with lots of controls; otherwise, if you have a large panorama, it might not animate that cleanly. Like the other solutions based on the TransitioningContentControl, you can centralize all the animations in one place and not have to handle them on each individual page. Peter's code also has a nice snippet for choosing the animation based on the navigation direction, so you can have just a forward/backward animation and not have to do anything on each page. You could also probably add some more advanced transitions using pixel shaders, or make a default no-transition state if you wanted some specific animation on a page where individual controls transition out differently, like some of the WP7 shell apps. Sample Code: 100% guaranteed to work on my emulator.
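    A hedged sketch of the snapshot step, since that's the core trick: Silverlight's WriteableBitmap can rasterize a UIElement, so the frame can freeze each page into an image before animating (the names here are illustrative, not the prototype's actual code):

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media.Imaging;

        internal static class TransitionSnapshots
        {
            // Render a live page into a static image, so the storyboard
            // animates a cheap bitmap instead of a heavy visual tree.
            public static Image Snapshot(UIElement element)
            {
                var bitmap = new WriteableBitmap(element, null); // null = no transform
                bitmap.Invalidate();                             // force the render
                return new Image { Source = bitmap };
            }
        }

        // During a transition: overlay Snapshot(oldPage) and Snapshot(newPage),
        // run the storyboard against those Images, then swap in the real page.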

    Read the article

  • How do I write a functional specification quickly and efficiently?

    - by giddy
    So I just read some fabulous articles by Joel on specs here. (Written in 2000!!) I read all 4 parts, but I'm looking for a methodical approach to writing my specs. I'm the lone dev working on this fairly complicated app (or family of apps) for a very well-known finance company. I've never made something this serious. I started out writing something like a bad spec, an overview of sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I also released a preview (a throwaway working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now. The whole thing comprises:
      - An MVC website (for admins and data viewing)
      - 2 Silverlight modules (for 2 specific tasks)
      - 1 desktop application
    I'm totally short on time and resources, and I need to get this done quickly; I also need to make sure these guys can read it equally quickly and painlessly. So how do I go about it? I'm looking for any tips, any real-world stuff. How do you guys usually do it? Do you make a mock screenshot of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, then filling in HTML files in folders to make it look like my MVC URL structure, and then having a section in the spec for the website with a write-up page for every URL I've got, with a screenshot. For my Windows Forms app, I've made somewhat of a demo WinForms project; would I then put in a dialog, or structure everything as I would in the real app, and then screenshot it?

    Read the article

  • Force Steam (and other programs that do not specify proxy settings) to use a proxy

    - by joshhunt
    My school requires a proxy for all internet access; it is impossible to use the internet without it. This is a problem for the many programs that don't seem to let you enter proxy settings. How can I use Steam when I am behind a proxy? Is it possible to somehow enter the details into a configuration file, or force it to get the settings from Internet Explorer? If not, does software exist for creating a 'virtual' network adapter that will pass all traffic (or all traffic of protocol X) through the proxy? Although I am facing this specific problem on Windows 7, solutions for all operating systems are welcome.

    Read the article
