Search Results

Search found 13325 results on 533 pages for 'domain transferring'.

Page 327/533 | < Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >

  • GPGPU

    What

    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether, GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why

    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share three here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (a nice take using mostly layman's terms) and vizworld (answering the question of "what's the big deal").

    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics from CPUs, which means they are suitable only for certain classes of problem (i.e. data-parallel algorithms) and not for others (e.g. algorithms with branching, recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria, then you probably want to check out this technology.

    How

    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

    If you followed the links above, then you've already come across some of the choices of programming models available today. Essentially, AMD offers their ATI Stream technology, accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into that GPU vendor, and hence your code cannot run on systems with cards from the other vendor (imagine if your CPU code ran on Intel chips but not on AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs, and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher-level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention just two of those efforts here: Accelerator from MSR and Brahma by Ananth. Comments about this post are welcome at the original blog.

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000 you undoubtedly have tweaked it to the point where you don't want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine.

    Note: For this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.

    Foobar2000

    Foobar2000 is an awesome music player which is highly customizable and which we've previously covered. Here we take a look at how it's set up on the current machine. It's nothing flashy, but it is set up for our needs and includes a lot of components and playlists.

    Backup Files

    Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First, type or copy the following into the Explorer address bar:

        %appdata%\foobar2000

    Now copy all of the files in the folder and store them on a network drive or some type of removable media or device.

    New Machine

    Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install as we will be replacing our backed-up configuration files anyway. When it launches, it will be set with all the defaults... and we want what we had back. Browse to the following on the new machine:

        %appdata%\foobar2000

    Delete all of the files in this directory, then replace them with the ones we backed up from the other machine. You'll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed-up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box "Do this for the next 6 conflicts".

    Now we're back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files from a computer on the same home network. All the music is coming from a directory on our Windows Home Server, so those paths hadn't changed. If you're moving these files to a computer on another network... say your work computer, you'll need to adjust where the music folders point to.

    Windows XP

    If you're setting up Foobar2000 on an XP machine, you can enter the following into the Run line:

        %appdata%\foobar2000

    Then copy your backed-up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm the replacement of the files and folders by clicking Yes to All.

    Conclusion

    This method worked perfectly for us on our home network setup. There might be some other things that will need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like rip an audio CD to FLAC. If you're a fan of Foobar2000 or considering switching to it, we will be covering more awesome features in future articles.
    Download Foobar2000 – Windows Only
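    If you would rather script the copy step than do it by hand, here is a minimal C# sketch (not from the original article; the network-share destination is a placeholder) that copies the profile folder out of %appdata% to a backup location:

        using System;
        using System.IO;

        class FoobarBackup
        {
            static void Main()
            {
                // Resolve %appdata%\foobar2000 for the current user.
                string source = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "foobar2000");

                // Hypothetical backup destination -- point this at your own share.
                string destination = @"\\homeserver\backups\foobar2000";

                CopyDirectory(source, destination);
            }

            // Recursively copy a directory tree, overwriting existing files.
            static void CopyDirectory(string sourceDir, string destDir)
            {
                Directory.CreateDirectory(destDir);

                foreach (string file in Directory.GetFiles(sourceDir))
                    File.Copy(file, Path.Combine(destDir, Path.GetFileName(file)), true);

                foreach (string dir in Directory.GetDirectories(sourceDir))
                    CopyDirectory(dir, Path.Combine(destDir, Path.GetFileName(dir)));
            }
        }

    Remember that the Components folder under C:\Program Files\Foobar2000 is outside %appdata%, so it would need its own copy step.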

    Read the article

  • OSI Model

    - by kaleidoscope
    The Open System Interconnection Reference Model (OSI Reference Model or OSI Model) is an abstract description for layered communications and computer network protocol design. In its most basic form, it divides network architecture into seven layers which, from top to bottom, are the Application, Presentation, Session, Transport, Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven Layer Model. A layer is a collection of conceptually similar functions that provides services to the layer above it and receives services from the layer below it.

    Description of the OSI layers:

    Layer 1: Physical Layer
    - Defines the electrical and physical specifications for devices. In particular, it defines the relationship between a device and a physical medium.
    - Establishment and termination of a connection to a communications medium.
    - Participation in the process whereby the communication resources are effectively shared among multiple users.
    - Modulation or conversion between the representation of digital data in user equipment and the corresponding signals transmitted over a communications channel.

    Layer 2: Data Link Layer
    - Provides the functional and procedural means to transfer data between network entities.
    - Detects and possibly corrects errors that may occur in the Physical Layer. The error check is performed using the Frame Check Sequence (FCS).
    - The destination address is then examined to see whether the host needs to process the rest of the frame itself or pass it on to another host.
    - The layer is divided into two sublayers: the Media Access Control (MAC) layer and the Logical Link Control (LLC) layer.
    - The MAC sublayer controls how a computer on the network gains access to the data and permission to transmit it.
    - The LLC sublayer controls frame synchronization, flow control and error checking.

    Layer 3: Network Layer
    - Provides the functional and procedural means of transferring variable-length data sequences from a source to a destination via one or more networks.
    - Performs network routing functions, and might also perform fragmentation and reassembly, and report delivery errors.
    - Routers operate at this layer, sending data throughout the extended network and making the Internet possible.

    Layer 4: Transport Layer
    - Provides transparent transfer of data between end users, providing reliable data transfer services to the upper layers.
    - Controls the reliability of a given link through flow control, segmentation/de-segmentation, and error control.
    - The Transport Layer can keep track of the segments and retransmit those that fail.

    Layer 5: Session Layer
    - Controls the dialogues (connections) between computers.
    - Establishes, manages and terminates the connections between the local and remote application.
    - Provides for full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and restart procedures.
    - Implemented explicitly in application environments that use remote procedure calls.

    Layer 6: Presentation Layer
    - Establishes a context between Application Layer entities, in which the higher-layer entities can use different syntax and semantics, as long as the presentation service understands both and the mapping between them. The presentation service data units are then encapsulated into Session Protocol data units and moved down the stack.
    - Provides independence from differences in data representation (e.g., encryption) by translating from application format to network format, and vice versa. The presentation layer works to transform data into the form that the application layer can accept. This layer formats and encrypts data to be sent across a network, providing freedom from compatibility problems. It is sometimes called the syntax layer.

    Layer 7: Application Layer
    - This layer interacts with software applications that implement a communicating component.
    - Identifies communication partners, determines resource availability, and synchronizes communication.
      - When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit.
      - When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist.
      - In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

    Technorati Tags: Kunal, OSI, Networking

    Read the article

  • Do we still have a case against the goto statement? [closed]

    - by FredOverflow
    Possible Duplicate: Is it ever worthwhile using goto?

    In a recent article, Andrew Koenig writes: When asked why goto statements are harmful, most programmers will say something like "because they make programs hard to understand." Press harder, and you may well hear something like "I don't really know, but that's what I was taught." For that reason, I'd like to summarize Dijkstra's arguments. He then shows two program fragments, one without a goto and one with a goto:

        if (n < 0) n = 0;

    Assuming that n is a variable of a built-in numeric type, we know that after this code, n is nonnegative. Suppose we rewrite this fragment:

        if (n >= 0) goto nonneg;
        n = 0;
        nonneg: ;

    In theory, this rewrite should have the same effect as the original. However, rewriting has changed something important: It has opened the possibility of transferring control to nonneg from anywhere else in the program.

    I emphasized the part that I don't agree with. Modern languages like C++ do not allow goto to transfer control arbitrarily. Here are two examples: You cannot jump to a label that is defined in a different function. You cannot jump over a variable initialization. Now consider composing your code of tiny functions that adhere to the single responsibility principle:

        int clamp_to_zero(int n)
        {
            if (n >= 0) goto n_is_not_negative;
            n = 0;
        n_is_not_negative:
            return n;
        }

    The classic argument against the goto statement is that control could have transferred from anywhere inside your program to the label n_is_not_negative, but this simply is not (and was never) true in C++. If you try it, you will get a compiler error, because labels are scoped. The rest of the program doesn't even see the name n_is_not_negative, so it's just not possible to jump there. This is a static guarantee!

    Now, I'm not saying that this version is better than the one without the goto, but to make the latter as expressive as the former, we would at least have to insert a comment, or, even better, an assertion:

        int clamp_to_zero(int n)
        {
            if (n < 0) n = 0;
            // n is not negative at this point
            assert(n >= 0);
            return n;
        }

    Note that you basically get the assertion for free in the goto version, because the condition n >= 0 is already written in line 1, and n = 0; satisfies the condition trivially. But that's just a random observation. It seems to me that "don't use gotos!" is one of those dogmas like "don't use multiple returns!" that stem from a time when the real problem was functions of hundreds or even thousands of lines of code.

    So, do we still have a case against the goto statement, other than that it is not particularly useful? I haven't written a goto in at least a decade, but it's not like I was running away in terror whenever I encountered one.[1] Ideally, I would like to see a strong and valid argument against gotos that still holds when you adhere to established programming principles for clean code like the SRP. "You can jump anywhere" is not (and has never been) a valid argument in C++, and somehow I don't like teaching stuff that is not true.

    [1]: Also, I have never been able to resurrect even a single velociraptor, no matter how many gotos I tried :(

    Read the article

  • Is Cloud Security Holding Back Social SaaS?

    - by Mike Stiles
    The true promise of social data co-mingling with enterprise data to influence and inform social marketing (all marketing really) lives in cloud computing. The cloud brings processing power, services, speed and cost savings the likes of which few organizations could ever put into action on their own. So why wouldn’t anyone jump into SaaS (Software as a Service) with both feet? Cloud security. Being concerned about security is proper and healthy. That just means you’re a responsible operator. Whether it’s protecting your customers’ data or trying to stay off the radar of regulatory agencies, you have plenty of reasons to make sure you’re as protected from hacking, theft and loss as you can possibly be. But you also have plenty of reasons to not let security concerns freeze you in your tracks, preventing you from innovating, moving the socially-enabled enterprise forward, and keeping up with competitors who may not be as skittish regarding SaaS technology adoption. Over half of organizations are transferring sensitive or confidential data to the cloud, an increase of 10% over last year. With the roles and responsibilities of CMO’s, CIO’s and other C’s changing, the first thing you should probably determine is who should take point on analyzing cloud software options, providers, and policies. An oft-quoted Ponemon Institute study found 36% of businesses don’t have a cloud security policy at all. So that’s as good a place to start as any. What applications and data are you comfortable housing in the cloud? Do you have a classification system for data that clearly spells out where data types can go and how they can be used? Who, both internally and at the cloud provider, will function as admins? What are the different levels of admin clearance? Will your security policies and procedures sync up with those of your cloud provider? The key is verifiable trust. Trust in cloud security is actually going up. 1/3 of organizations polled say it’s the cloud provider who should be responsible for data protection. And when you look specifically at SaaS providers, that expectation goes up to 60%. 57% “strongly agree” or “agree” there’s more confidence in cloud providers’ ability to protect data. In fact, some businesses bypass the “verifiable” part of verifiable trust. Just over half have no idea what their cloud provider does to protect data. And yet, according to the “Private Cloud Vision vs. Reality” InformationWeek Report, 82% of organizations say security/data privacy are one of the main reasons they’re still holding the public cloud at arm’s length. That’s going to be a tough position to maintain, because just as social is rapidly changing the face of marketing, big data is rapidly changing the face of enterprise IT. Netflix, who’s particularly big on the benefits of the cloud, says, "We're systematically disassembling the corporate IT components." An enterprise can never realize the full power of big data, nor get the full potential value out of it, if it’s unwilling to enable the integrations and dataset connections necessary in the cloud. Because integration is called for to reduce fragmentation, a standardized platform makes a lot of sense. With multiple components crafted to work together, you’re maximizing scalability, optimization, cost effectiveness, and yes security and identity management benefits. You can see how the incentive is there for cloud companies to develop and add ever-improving security features, making cloud computing an eventual far safer bet than traditional IT. 
    @mikestiles
    Photo: stock.xchng

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
    On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

    Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure, and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always and have always been taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and as of now it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

    Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush.

    Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, one is left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task.

    Cheers,
    Tony.
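    As an aside, the Database Copy feature mentioned above boils down to a single T-SQL statement. Here is a minimal ADO.NET sketch of issuing it (the server, database and login names are placeholders, and it assumes the CREATE DATABASE ... AS COPY OF syntax SQL Azure offered at the time of writing):

        using System;
        using System.Data.SqlClient;

        class DatabaseCopyExample
        {
            static void Main()
            {
                // Placeholder connection string -- connect to the master database of
                // your SQL Azure server with a login allowed to create databases.
                string connectionString =
                    "Server=tcp:yourserver.database.windows.net;Database=master;" +
                    "User ID=yourlogin@yourserver;Password=...;Encrypt=True;";

                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();

                    // Kicks off an asynchronous, transactionally consistent copy of
                    // 'sourcedb' into a new database called 'sourcedb_copy'.
                    using (var command = new SqlCommand(
                        "CREATE DATABASE sourcedb_copy AS COPY OF sourcedb", connection))
                    {
                        command.ExecuteNonQuery();
                    }
                }
            }
        }

    The copy stays in the cloud, of course, which is exactly Tony's point; getting it out still requires BACPAC, SSIS, BCP or a comparison tool.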

    Read the article

  • Friday Tips #6, Part 1

    - by Chris Kawalek
    We have a two-parter this week, with this post focusing on desktop virtualization and the next one on server virtualization.

    Question: Why would I use the Oracle Secure Global Desktop Secure Gateway?

    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Well, for the benefit of those who might not be familiar with client connections in Oracle Secure Global Desktop (SGD), let me back up and briefly explain. An SGD client connects to an SGD server using two distinct protocols, which, by default, require two distinct TCP ports. The first is the HTTP protocol, used by the web browser to connect to the SGD webserver on TCP port 80, or, if secure connections are enabled (SSL/TLS), then TCP port 443, commonly identified as the "HTTPS" port, that is, "SSL encrypted HTTP." The second protocol from the client to the server is the Adaptive Internet Protocol, or AIP, which is used for displaying applications, transferring drive mapping data, print jobs, and so on. By default, AIP uses TCP port 3144, or port 5307 when SSL is enabled. When SGD clients need to access SGD over a firewall, the ports that AIP requires are typically "closed"; and most administrators are reluctant, to put it mildly, to change their firewall configurations to allow AIP traffic on 3144/5307.

    To avoid this problem, SGD introduced "Firewall Forwarding", a technique where, in effect, both HTTP and AIP traffic are "multiplexed" onto a single "well-known" TCP port, that is, port 443, the HTTPS port. This is also known as single-port firewall traversal. This technique takes advantage of the fact that, as a "well-known service", port 443 is usually "open", allowing (encrypted) traffic to pass. At the target SGD server, the two protocols are de-multiplexed and routed appropriately.

    The Secure Gateway was developed in response to requirements from customers for SGD to support multi-stage DMZs, and to avoid exposing SGD servers and the information they contain directly to connections from the Internet. The Secure Gateway acts as a reverse proxy in the first tier of the DMZ, accepting, authenticating, and terminating incoming client connections, and then re-encrypting the connections and proxying them, routing them on to SGD servers deeper in the network. The client no longer needs to know the name or IP address of the SGD servers in the network; it connects to the gateway only. The gateway takes care of those internal network details.

    The Secure Gateway supports the same single-port firewall capability as "Firewall Forwarding", but offers the additional advantage of load-balancing incoming client connections amongst SGD array members, which could be cumbersome without a forward-deployed secure gateway. Load-balancing weights and policies can be monitored and tuned using the "Balancer Manager" application and Apache mod_proxy_balancer directives.

    Going forward, our architects recommend the use of the Secure Gateway over "Firewall Forwarding" for single-port firewall traversal, due to its architectural advantages, greater flexibility and enhanced features. Finally, it should be noted that the Secure Gateway is not separately priced; any licensed SGD customer may use the Secure Gateway component at no additional cost. For more information, see the "Secure Gateway Administrator's Guide".

    Read the article

  • Maven error: Unable to get resource / Server redirected too many times

    - by tewe
    Our proxy went down and I tried to update dependencies with Maven while it was off. Since then I can't download anything with Maven; I get this error for everything. I tried the -U option, deleted my local repository and tried different Maven versions (2.0.9, 2.2.1), but it doesn't work. Any idea how to solve this? Earlier it also said 'repository will be blacklisted' for all of them.

        Downloading: http://repo1.maven.org/maven2/org/apache/maven/plugins/maven-compiler-plugin/2.1/maven-compiler-plugin-2.1.pom
        [WARNING] Unable to get resource 'org.apache.maven.plugins:maven-compiler-plugin:pom:2.1' from repository central (http://repo1.maven.org/maven2):
            Error transferring file: Server redirected too many times (20)
        org.apache.maven.plugins:maven-compiler-plugin:pom:2.1 from the specified remote repositories:
            jboss-snapshot (http://snapshots.jboss.org/maven2),
            central (http://repo1.maven.org/maven2),
            JBoss Repo (http://repository.jboss.com/maven2),
            spring-maven-snapshot (http://maven.springframework.org/snapshot),
            com.springsource.repository.bundles.external (http://repository.springsource.com/maven/bundles/external),
            com.springsource.repository.bundles.snapshot (http://repository.springsource.com/maven/bundles/snapshot),
            jboss (http://repository.jboss.com/maven2),
            com.springsource.repository.bundles.release (http://repository.springsource.com/maven/bundles/release),
            jboss-snapshot-plugins (http://snapshots.jboss.org/maven2),
            com.springsource.repository.bundles.milestone (http://repository.springsource.com/maven/bundles/milestone),
            jboss-plugins (http://repository.jboss.com/maven2)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:228)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:90)
            at org.apache.maven.project.DefaultMavenProjectBuilder.findModelFromRepository(DefaultMavenProjectBuilder.java:558)
            ... 25 more
        Caused by: org.apache.maven.wagon.ResourceDoesNotExistException: Unable to download the artifact from any repository
            at org.apache.maven.artifact.manager.DefaultWagonManager.getArtifact(DefaultWagonManager.java:404)
            at org.apache.maven.artifact.resolver.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:216)
            ... 27 more

    Read the article

  • using text and ntext SQL Datatypes in RPG

    - by David Stratton
    I'll preface this with saying that I'm a .NET developer, and am NOT an RPG developer. I'm working with one of our RPG developers to come up with a solution, so any suggestions you provide will get passed to him. We have a scenario where we want our iSeries to read from a SQL Server database. One of the columns is a TEXT column. In RPG, there is no equivalent data type to use for this. We've gone back and forth on this, and our current plan is to change course and have our SQL Server write out a text file, which the iSeries can pick up and parse. This is, however, a last-resort option, as the data in the file is sensitive, and we'd like to avoid the additional security overhead. We've already got the SQL Server locked down as tight as possible (only one user has read access to this, and that user is an iSeries user). We don't want to have to worry about transferring files back and forth. However, at this point, we see no other option. We have no in-house Java developers, and need to do this in RPG. So I'm wondering if there are any RPG developers out there who have faced this situation and have any advice.

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all, I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters in a game, given a company's user database. So far I have been following the usual recipe for a data mining project: construct a data warehouse that aggregates significant information; select a classifier and train it with a subsection of records from the warehouse; validate the classifier with another test set; lather, rinse, repeat. Surprisingly, I've found very little in this area regarding literature, best practices, etc. I am hoping to crowdsource the information-gathering problem here. Specifically, what I'm looking for:

    - What classifiers have worked well for this type of pattern mining (it seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.)? Are there any highly agreed-upon attributes specific to social networking / gaming data?
    - What is a practical amount of information that should be considered? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?

    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.

    Read the article

  • RIA Services: Inserting multiple presentation-model objects

    - by nlawalker
    I'm sharing data via RIA Services using a presentation model on top of LINQ to SQL classes. On the Silverlight client, I created a couple of new entities (album and artist), associated them with each other (by either adding the album to the artist's album collection, or setting the Artist property on the album - either one works), added them to the context, and submitted changes. On the server, I get two separate Insert calls - one for the album and one for the artist. These entities are new, so their ID values are both set to the default int value (0 - keep in mind that depending on my DB, this could be a valid ID in the DB) because, as far as I know, you don't set IDs for new entities on the client.

    This would all work fine if I were transferring the LINQ to SQL classes via my RIA services, because even though the Album insert includes the Artist and the Artist insert includes the Album, both are Entities and the L2S context recognizes them. However, with my custom presentation-model objects, I need to convert them back to the LINQ to SQL classes, maintaining the associations in the process, so they can be added to the L2S context. Put simply, as far as I can tell, this is impossible. Each entity gets its own Insert call, but there's no way you can just insert the one entity, because without IDs the associations are lost. If the database used GUID identifiers it would be a different story, because I could set those on the client.

    Is this possible, or should I be pursuing another design?

    Read the article

  • MVC ASP.NET, ObjectContext and Ajax. Weird Behaviour

    - by fabianadipolo
    Hi, I've been creating a web application in ASP.NET MVC. I have three different projects/solutions. One solution contains the model in EF (the DAL) and all the methods to add, update, delete and query the objects in the model; the ObjectContext is managed there on a per-request basis. Another solution contains a content management system in which authorized users insert, delete, update and access objects through the DAL mentioned before. And the last solution contains the web page that is accessed by all users and where the only operations executed are selects - no updates, inserts or deletes. All the selects are executed against the DAL mentioned before (the first solution).

    The problem here is that I'm not sure whether an ObjectContext with an HttpContext lifespan is the best solution. I have a lot of Ajax calls in my web app and I'm not sure if an HttpContext-scoped context could interfere with the performance of the application. I've noticed that in some cases, especially when someone is working in the content manager inserting, updating or deleting, when you try to click on any link of the user web application (the web app that is accessed by any user - the third one I mentioned before) the web page freezes and remains stuck transferring data. In order to stop that behaviour you have to stop and refresh, or click several times on the link.

    Excuse my English. I hope you can understand the problem and help me solve this issue. Thanks in advance.
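    For reference, the per-request pattern described above is usually implemented by stashing the context in HttpContext.Current.Items, so each request gets its own instance and disposes it when the request ends. A minimal sketch follows (the MyEntities context name is a placeholder, and this is only one way to scope the context):

        using System.Web;

        public static class ObjectContextManager
        {
            private const string ContextKey = "__ObjectContext";

            // Returns the ObjectContext for the current HTTP request,
            // creating it on first use.
            public static MyEntities Current
            {
                get
                {
                    var items = HttpContext.Current.Items;
                    var context = items[ContextKey] as MyEntities;
                    if (context == null)
                    {
                        context = new MyEntities();
                        items[ContextKey] = context;
                    }
                    return context;
                }
            }

            // Call this from Application_EndRequest in Global.asax
            // so the context never outlives the request.
            public static void Dispose()
            {
                var context = HttpContext.Current.Items[ContextKey] as MyEntities;
                if (context != null)
                {
                    context.Dispose();
                    HttpContext.Current.Items.Remove(ContextKey);
                }
            }
        }

    With one context per request, concurrent Ajax requests never share a context instance (ObjectContext is not thread-safe); if the CMS and the public site were instead sharing a longer-lived context, long-running updates could block the read pages, which would look a lot like the freezing described above.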

    Read the article

  • Posting xml from classic asp to asp.net

    - by Chris Dunaway
    I apologize if this has been asked before. I searched and didn't find anything that matched my situation. Also bear in mind I am fairly new to ASP/ASP.NET development. My current project is a relatively simple e-commerce site. The customer will connect to the site, select products, input shipping and billing information, payment information (credit card) and submit the order. The project is being split into two parts: the store front, which includes displaying the items and taking the customer's shipping and billing information, and the payment site, which will collect the customer's credit card, compute tax, and save the order into the company's system. The reason the site was split up was that our side (the payment side) already has facilities for credit card handling and tax computation. There may also be some regulatory issues that the store front side does not want to deal with (which we already do).

    I'm working on the payment portion of the app and I am using ASP.NET. The store front side is being written in classic ASP (not my decision). Each part will be hosted on a different server. The problem I am having is transferring the contents of the "shopping cart" to our app so that we can collect the credit card info and submit the order. We had thought that the classic ASP site could somehow post an XML fragment which contains the billing/shipping info and the items selected. Our side would display a summary of the order, securely collect the credit card info, and submit the order to our system. But I have been unable to post or send the XML from a classic ASP page on one server to our ASP.NET application on another. It all works just fine when I test on the same server.

    How can I post (or otherwise transfer) the shopping cart data from classic ASP to ASP.NET across server boundaries and transfer control to the ASP.NET application? As I said, I am new to web development, so this is proving quite a challenge for me. Thanks
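    For illustration only, here is a minimal sketch of what the ASP.NET end of such a handoff could look like: an IHttpHandler that reads an XML document posted in the request body (for example, sent by MSXML2.ServerXMLHTTP from the classic ASP site). The handler name and the cart schema are assumptions, not anything from the original question.

        using System;
        using System.Web;
        using System.Xml;

        public class CartReceiver : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                var cart = new XmlDocument();
                cart.Load(context.Request.InputStream);   // raw POST body

                // Pull a value out of the (hypothetical) cart document.
                XmlNode total = cart.SelectSingleNode("/cart/total");

                context.Response.ContentType = "text/plain";
                context.Response.Write(total != null ? "received" : "missing total");
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }

    If the handoff has to happen in the customer's browser rather than server to server (so that control transfers as well as data), a plain HTML form that posts the XML in a hidden field to the ASP.NET URL works across servers too.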

    Read the article

  • Archive log transfer from Oracle 9i to Oracle 10g

    - by Jamie Love
    Hi all, I have a situation where I need to transfer Oracle 9i archive logs to an Oracle 10g database, from where they are to be mined by LogMiner and then used by Oracle Streams capture/apply processes. (Oracle 9 archive logs can be read by the Oracle 10 LogMiner - I can manually copy the archive logs across, manually register them and have them mined, captured, then applied.) The difficulty is that the way Oracle does archive log transfer changed quite a bit between 9i and 10g, and setting up the 9i database to transfer to the remote machine like so:

        log_archive_dest_state_2 = enable
        log_archive_dest_2 = "service=OTHERMACHINE arch optional"

    no longer works. I get this in the 9i logs:

        *** 2009-05-22 04:03:44.149
        RFS network connection lost at host 'OTHERMACHINE'
        Error 3113 attaching RFS server to standby instance at host 'OTHERMACHINE'
        Error 3113 attaching to destination LOG_ARCHIVE_DEST_2 standby host 'OTHERMACHINE'
        Heartbeat failed to connect to standby 'OTHERMACHINE'. Error is 3113.
        *** 2009-05-22 04:03:44.150
        kcrrfail: dest:2 err:3113 force:0
        ORA-03113: end-of-file on communication channel

    And in the 10g log I get:

        Fri May 22 04:07:42 2009
        WARNING: inbound connection timed out (ORA-3136)

    My question is: does anyone know how I could configure my 9i or 10g server such that the 10g server will accept the 9i connection, in such a way that I can transfer the 9i archive logs to the 10g server? It would be a bonus if the archive logs would be automatically registered on the 10g server. Note I have not set up a full Data Guard configuration here, and the 10g database is not a secondary server.

    Thanks for any suggestions.

    Edit: Note that I can log on to the 10g server from the 9i server via SQL*Plus, so connectivity is not the problem.

    Edit 2: After a large amount of time searching for a solution, I've finally decided that such a mechanism doesn't work, and that a non-Oracle method of transferring archive logs from 9i to 10g will need to be used (e.g. rsync).

    Read the article

  • Setting the comment of a column to that of another column in Postgresql

    - by dland
    Suppose I create a table in PostgreSQL with a comment on a column:

        create table t1 (
            c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate it with the new column:

        select comment_text from (what?)
        where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searching on the web I haven't learnt its name. Ideally I'd like to be able to write:

        comment on column t1.c1 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas.

    Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the datatype of a PostgreSQL column. You can read about that on my blog.
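    For what it's worth, column comments live in the pg_description catalog, and Postgres exposes a helper function, col_description(table_oid, column_number), for reading them. Below is a minimal C# sketch using the Npgsql driver (connection details are placeholders) that reads the comment on t1.c1; the same value could then be fed into a second COMMENT ON COLUMN statement for c2:

        using System;
        using Npgsql;

        class ColumnCommentExample
        {
            static void Main()
            {
                // Placeholder connection string.
                using (var conn = new NpgsqlConnection(
                    "Host=localhost;Database=mydb;Username=me;Password=..."))
                {
                    conn.Open();

                    // col_description(table_oid, column_number) reads the comment
                    // stored by COMMENT ON COLUMN; c1 is attribute number 1 here.
                    using (var cmd = new NpgsqlCommand(
                        "select col_description('t1'::regclass, 1)", conn))
                    {
                        object comment = cmd.ExecuteScalar();
                        Console.WriteLine(comment);   // prints: foo
                    }
                }
            }
        }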

    Read the article

  • RIA: how to get functionality, not a data

    - by Budda
    On the server side I have the following class:

        public class Customer
        {
            [Key]
            public int Id { get; set; }

            public string FirstName { get; set; }
            public string SecondName { get; set; }

            public string FullName
            {
                get { return string.Concat(FirstName, " ", SecondName); }
            }
        }

    The problem is that each calculated field is evaluated on the server and transferred to the client as data, for example the 'FullName' property:

        [DataMember()]
        [Editable(false)]
        [ReadOnly(true)]
        public string FullName
        {
            get
            {
                return this._fullName;
            }
            set
            {
                if ((this._fullName != value))
                {
                    this.ValidateProperty("FullName", value);
                    this.OnFullNameChanging(value);
                    this._fullName = value;
                    this.RaisePropertyChanged("FullName");
                    this.OnFullNameChanged();
                }
            }
        }

    Instead of transferring the data (which consumes traffic and in some cases introduces significant overhead), I would like to have the calculation performed on the client side. Is this possible without manual duplication of the property implementation? Thank you.
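    One approach worth checking (a suggestion on my part, not something from the original post): WCF RIA Services copies any server file whose name ends in .shared.cs into the generated client project, so a computed property defined in such a file is compiled on both sides and nothing extra goes over the wire. A sketch, assuming the entity class is made partial and the computed property is moved out of the serialized part so it is not code-generated as a data member:

        // Customer.shared.cs -- lives in the server project; RIA Services copies
        // *.shared.cs files to the Silverlight client, so this partial class is
        // compiled on both sides.
        public partial class Customer
        {
            public string FullName
            {
                get { return string.Concat(FirstName, " ", SecondName); }
            }
        }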

    Read the article

  • Why an auto_ptr can "seal" a container

    - by icephere
    The auto_ptr article on Wikipedia says that "an auto_ptr containing an STL container may be used to prevent further modification of the container." It uses the following example:

        auto_ptr<vector<ContainedType> > open_vec(new vector<ContainedType>);
        open_vec->push_back(5);
        open_vec->push_back(3);

        // Transfers control, but now the vector cannot be changed:
        auto_ptr<const vector<ContainedType> > closed_vec(open_vec);
        // closed_vec->push_back(8);  // Can no longer modify

    If I uncomment the last line, g++ will report an error such as:

        t05.cpp:24: error: passing 'const std::vector<int, std::allocator<int> >' as 'this' argument of
        'void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = int, _Alloc = std::allocator<int>]'
        discards qualifiers

    I am curious why, after transferring the ownership of this vector, it can no longer be modified? Thanks a lot!

    Read the article

  • C# - parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
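    A minimal sketch of the record-stripping step is below. It assumes the layout described above (one length byte before and one after each chunk, with the payload in between); the continuation flags described in the linked Intel document are deliberately ignored, so treat this as a starting point rather than a complete reader.

        using System;
        using System.IO;

        class FortranRecordReader
        {
            // Strips the compiler-inserted length bytes, returning just the payload.
            // Assumed layout: [length byte][payload][length byte], repeated to EOF.
            static byte[] ExtractPayload(string path)
            {
                byte[] raw = File.ReadAllBytes(path);
                using (var payload = new MemoryStream())
                {
                    int pos = 0;
                    while (pos < raw.Length)
                    {
                        int length = raw[pos++];          // leading length byte
                        payload.Write(raw, pos, length);  // copy the payload bytes
                        pos += length;
                        pos++;                            // skip trailing length byte
                    }
                    return payload.ToArray();
                }
            }

            static void Main(string[] args)
            {
                byte[] data = ExtractPayload(args[0]);
                Console.WriteLine("Extracted {0} payload bytes.", data.Length);
                // Next step: read typed values out of 'data', e.g. with a BinaryReader
                // over a MemoryStream, matching the original FORTRAN record layout.
            }
        }

    Reading the whole file with File.ReadAllBytes is fine for files around 30 MB; for much larger files, the same loop can be driven from a FileStream instead.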

    Read the article

  • BITS, TakeOwnership, and Kerberos / Windows Integrated Authentication

    - by Charlie Flowers
    We're using BITS to upload files from machines in our retail locations to our servers. BITS will stop transferring a file if the user who owns the BITS job logs off. Therefore, we're using a Windows Service running as LocalSystem to submit the jobs to BITS and be the job owner. This allows transfers to continue 24/7. However, it raises a question about authentication. We want the BITS server extensions in IIS to use Kerberos to authenticate the client machine. As far as I can tell, that leaves us with only 2 options, both of which are not ideal: Either we create an "ImageUploader" account and store its username/password in a config file that the Windows Service uses as credentials for the BITS job, or we ask the logged on user who creates the BITS job for his password, and then use his credentials for the BITS job. I guess the third option is not to use Kerberos, and maybe go with Basic Auth plus SSL. I'm sure I'm wrong and there's a better option. Is there? Thanks in advance.

    Read the article

  • How should I format an HTTPService result in Flex so that it is accessible for a pie chart?

    - by Eric Reynolds
    I feel like this should be simple, however everywhere I look online someone does something different. I am doing a small charting application to track download statistics, and want to put the data into a pie chart. I am new to Flex, so if my code is horrible I apologize.

        <s:HTTPService id="service" url="admin/stats/totalstats.php"
                       fault="service_faultHandler(event)"
                       result="service_resultHandler(event)" />

    What is the best resultFormat for this purpose, and if I am assigning the returned value to an ActionScript variable, should it be an ArrayList? An ArrayCollection? Here's a sample of the XML being returned from the HTTPService:

        <DownloadStats>
          <year count="24522" year="2008">
            <month count="20" month="5" year="2008" full="May 2008">
              <day count="2" month="5" day="20" year="2008"/>
              <day count="1" month="5" day="21" year="2008"/>
              <day count="9" month="5" day="22" year="2008"/>
              <day count="1" month="5" day="23" year="2008"/>
              <day count="1" month="5" day="29" year="2008"/>
              <day count="1" month="5" day="30" year="2008"/>
              <day count="5" month="5" day="31" year="2008"/>
            </month>
            ...
          </year>
        </DownloadStats>

    Any help is appreciated. Thanks, Eric R.

    EDIT: I decided it would be helpful to show how I am transferring the data to the chart, to make sure I am not doing something wrong there either:

        <mx:PieChart id="pieChart">
          <mx:PieSeries nameField="year" field="count" labelPosition="callout"
                        displayName="Total" dataProvider="{graphData}"/>
        </mx:PieChart>

    Read the article

  • SL3 Nav framework + MVVM ligh

    - by Murari
    Hi All, thanks for taking the time to read through my question. Any guidance is really appreciated. I am using the SL3 Navigation framework in my LOB application, with MVVM Light as the framework guidance. I have a DataGrid of employees, and when the user clicks the employee id link in the DataGrid, I transfer the user to the "Edit" page. I would like to pass the employee id as a query parameter to the edit page.

    The issue is that I can only access the query parameter in EditStaffView.xaml.cs, which I don't want to do:

        protected override void OnNavigatedTo(NavigationEventArgs e)
        {
            if (this.NavigationContext.QueryString.ContainsKey("staffcode"))
            {
                string title = this.NavigationContext.QueryString["staffcode"];
            }
        }

    I would like to retrieve the query parameter in my viewmodel and, based on the query parameter, perform certain operations. When the constructor is called, I would like the view to pass the staff id as shown below:

        public EditStaffViewModel(int staffId)
        {
            LoadData(staffId);
        }

    I am constructing my hyperlink buttons in the DataGrid dynamically as shown below:

        staffListingModel.HyperlinkNavigationUri = string.Format("{0}{1}",
            NavigationUri.DataEntryEditStaff,
            "?staffcode={" + staffListingModel.StaffCode + "}");

    and the XAML looks like:

        <HyperlinkButton Content="{Binding StaffCode, Mode=TwoWay}"
                         NavigateUri="{Binding HyperlinkNavigationUri}" />

    Any idea how to do this? Thanks for the help. Murari
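    One pattern that fits here (an assumption on my part, not from the original post) is to keep OnNavigatedTo as the only place that touches NavigationContext, and have it hand the value to the viewmodel through MVVM Light's Messenger, so the viewmodel never sees the view. A sketch, with class names matching the question:

        using System.Windows.Controls;
        using System.Windows.Navigation;
        using GalaSoft.MvvmLight.Messaging;

        // View code-behind: the only place that reads NavigationContext.
        public partial class EditStaffView : Page
        {
            protected override void OnNavigatedTo(NavigationEventArgs e)
            {
                base.OnNavigatedTo(e);
                if (NavigationContext.QueryString.ContainsKey("staffcode"))
                {
                    // Forward the raw value; the viewmodel decides how to parse it.
                    Messenger.Default.Send(
                        NavigationContext.QueryString["staffcode"], "EditStaff");
                }
            }
        }

        // ViewModel: registers for the same token and loads the record.
        public class EditStaffViewModel
        {
            public EditStaffViewModel()
            {
                Messenger.Default.Register<string>(this, "EditStaff",
                    staffCode => LoadData(int.Parse(staffCode.Trim('{', '}'))));
            }

            private void LoadData(int staffId) { /* fetch the employee here */ }
        }

    Note the ordering assumption: the viewmodel has to be constructed (and therefore registered) before OnNavigatedTo fires, otherwise the message is lost; setting the DataContext in the view's constructor is one way to guarantee that.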

    Read the article

  • 'mvn install' does not work & shows error while attempting to download

    - by Raj
    I am using Maven version 3.0.2. mvn --version works fine on the command line, but when I try mvn install it does not work and shows the following error:

        Z:\dev\hector rantav>mvn install
        [INFO] Scanning for projects...
        Downloading: http://repo1.maven.org/maven2/junit/junit/3.8.2/junit-3.8.2.jar
        [ERROR] The build could not read 1 project - [Help 1]
        [ERROR]
        [ERROR]   The project me.prettyprint:hector:0.7.0-24-SNAPSHOT (Z:\dev\hector rantav\pom.xml) has 1 error
        [ERROR]     Unresolveable build extension: Plugin org.apache.maven.scm:maven-scm-manager-plexus:1.3 or one
                    of its dependencies could not be resolved: Could not transfer artifact junit:junit:jar:3.8.2
                    from/to central (http://repo1.maven.org/maven2): Error transferring file: repo1.maven.org:
                    Unknown host repo1.maven.org - [Help 2]
        [ERROR]
        [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
        [ERROR] Re-run Maven using the -X switch to enable full debug logging.
        [ERROR]
        [ERROR] For more information about the errors and possible solutions, please read the following articles:
        [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
        [ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException

        Z:\dev\hector rantav>

    Please let me know how I can fix this.

    Read the article

  • Fatal error when using FILE* in Windows from DLL

    - by AlannY
    Hi there. Recently, I found a problem with the Visual C++ 2008 compiler, but worked around it with a minor hack. Currently, I cannot use the same hack, and the problem exists in both 2008 and 2010 (Express). So, I've prepared two simple C files for you: one for a DLL, one for a program.

    DLL (file-dll.c):

        #include <stdio.h>

        __declspec(dllexport) void print_to_stream (FILE *stream)
        {
            fprintf (stream, "OK!\n");
        }

    And the program, which links against this DLL via file-dll.lib (file-test.c):

        #include <stdio.h>

        __declspec(dllimport) void print_to_stream (FILE *stream);

        int main (void)
        {
            print_to_stream (stdout);
            return 0;
        }

    To compile and link the DLL:

        cl /LD file-dll.c

    To compile and link the program:

        cl file-test.c file-dll.lib

    When invoking file-test.exe, I get a fatal error (similar to a segmentation fault on UNIX). As I said earlier, I had the same problem before: it is about passing a FILE* pointer to a DLL. I thought it might be because of a compiler mismatch, but now I'm using one compiler for everything and it's not the problem. ;-( What can I do now?

    UPD: I've found the solution:

        cl /LD /MD file-dll.c
        cl /MD file-test.c file-dll.lib

    The key is to link against the dynamic C runtime; by default (I did not know this) each module links the static runtime, and hence the error occurs (I can see why: each module then gets its own copy of the CRT, so a FILE* from one CRT means nothing to the other).

    P.S. Thanks for your patience.

    Read the article

  • PHP Redirection with Post Parameters

    - by arik-so
    Hello, I have a webpage. This webpage redirects the user to another webpage, more or less the following way:

        <form method="post" action="anotherpage.php" id="myform">
            <?php
            foreach($_GET as $key => $value){
                echo "<input type='hidden' name='{$key}' value='{$value}' />";
            }
            ?>
        </form>
        <script>
            document.getElementById('myform').submit();
        </script>

    Well, you see, what I do is transfer the GET params into POST params. Do not tell me it is bad, I know that myself, and it is not exactly what I really do; what is important is that I collect data from an array and try submitting it to another page via POST. But if the user has JavaScript turned off, it won't work.

    What I need to know: is there a way to transfer POST parameters by means of PHP, so the redirection can be done the PHP way (header('Location: anotherpage.php');) too? It is very important for me to pass the params via POST. I cannot use the $_SESSION variable because the webpage is on another domain, thus the $_SESSION variables differ. Anyway, I simply need a way to transfer POST variables with PHP ^^ Thanks in advance!

    Read the article
