Search Results

Search found 59643 results on 2386 pages for 'data migration'.


  • Cost of maintenance depending on paradigms

    - by Anto
    Is there any data on which paradigms allow for code that is easier/cheaper to maintain? Certainly, independently of the chosen paradigm, good design is cheaper to maintain than bad, but there should probably be major differences coming only from the paradigm choice. Unstructured programming, for instance, generates very messy code (spaghetti code) which is expensive to maintain. In object-oriented programming, implementation details are hidden, and thus it should be pretty cheap to change them. In functional programming there are no side effects, so there is less risk of introducing bugs during maintenance, which should make it cheaper. Is there any data on which paradigms are the most cost-efficient when it comes down to maintenance? If no such data exists, what is your take on the question?

    Read the article

  • Real Time Monitoring System using .net [closed]

    - by sameer
    I need to develop an application that displays a dashboard where data fetched from various SQL databases on different servers is shown. This needs to happen in near real time; we can have a refresh time of, say, 5 minutes. Here is my thinking, please point out anything that is wrong. 1) Develop a Windows Service to accumulate the data from the various SQL Server instances. 2) Persist those details into a SQL DB, from which the dashboard will be displayed on the web page. 3) Fetching of data by the Windows Service will be triggered every x minutes. 4) The SQL Server instance details will be stored in the SQL DB which the Windows Service will be referring to. Does this approach make sense? Thanks.
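    The question targets .NET, but as a rough, language-agnostic sketch of the poll-and-persist loop described above (written in Python here; the source list, the dashboard_metrics table, and the placeholder fetch_metrics query are all hypothetical):

    ```python
    import sqlite3
    import time

    # Hypothetical list of monitored SQL Server instances; in the described design
    # these details would themselves be read from the central SQL DB.
    SOURCES = [
        {"name": "server-a", "dsn": "placeholder-connection-string"},
        {"name": "server-b", "dsn": "placeholder-connection-string"},
    ]

    POLL_INTERVAL_SECONDS = 5 * 60  # the "refresh every 5 minutes" requirement


    def fetch_metrics(source):
        """Placeholder for querying one SQL Server instance (e.g. via pyodbc)."""
        return [("orders_per_minute", 42.0)]


    def persist(conn, source_name, rows):
        """Write the collected rows into the central dashboard database."""
        with conn:
            conn.executemany(
                "INSERT INTO dashboard_metrics(source, metric, value) VALUES (?, ?, ?)",
                [(source_name, metric, value) for metric, value in rows],
            )


    def main():
        conn = sqlite3.connect("dashboard.db")  # stands in for the central SQL DB
        conn.execute(
            "CREATE TABLE IF NOT EXISTS dashboard_metrics(source TEXT, metric TEXT, value REAL)"
        )
        while True:  # the Windows Service's timer loop
            for source in SOURCES:
                persist(conn, source["name"], fetch_metrics(source))
            time.sleep(POLL_INTERVAL_SECONDS)


    if __name__ == "__main__":
        main()
    ```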

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007 while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Besides this innovative renaming of the product, to be honest, I didn't know much about it, except it being a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful.

    In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. The shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away quickly from that group in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that performance improvement with Coherence was related to the response time. That could be considered legitimate at the time, because after all, caches help to reduce latency on cached data access and hence reduce the response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application in order to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception, or obsession, about how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact we all know that the performance of a system is generally presented by its Capacity (or Throughput), with the two important dimensions Speed (response-time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response-time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. In every traditional system, increasing the Volume (transactions) will also increase the Speed (response-time) at some point. The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is actually not just about having an application running as fast as possible, but about having a much more predictable system, with constant response time, that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from a programmatic angle (fewer network hops to complete a task) and/or from a hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:

    - Constant data object access time, independently of the number of objects and the Coherence cluster size
    - Data object distribution by affinity, for in-memory data grouping
    - In-place data processing, for parallel execution

    To summarize, Oracle Coherence is indeed useful to improve your application performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under Extreme Load while keeping a consistent Speed. In the future I will keep adding new blog entries around this topic, with some sample code and experiences I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, then start playing with the product through our tutorial. Have fun!
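    To make the Capacity = Volume / Speed relationship above concrete, here is a toy back-of-the-envelope calculation (a sketch with invented numbers, not Coherence code) illustrating why constant response-time under load is what lets scale-out add capacity:

    ```python
    # Toy illustration of Capacity (TPS) = Volume (T) / Speed (S); all numbers invented.

    def capacity(volume_transactions, response_time_seconds):
        """Throughput, in transactions per second, for a batch of work."""
        return volume_transactions / response_time_seconds

    # "Traditional" system: response time grows with load (the while-loop effect),
    # so doubling the volume does not add capacity.
    print(capacity(100, 1.0))  # 100.0 TPS
    print(capacity(200, 2.0))  # still 100.0 TPS - speed degraded with volume

    # XTP-style system: per-request response time stays roughly constant as nodes
    # are added, so capacity scales (roughly) linearly with the cluster size.
    nodes = 4
    print(nodes * capacity(100, 1.0))  # ~400 TPS across the cluster
    ```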

    Read the article

  • Webcast: Moving Client/Server and .NET Applications to Windows Azure Cloud

    - by Webgui
    The Cloud and SaaS models are changing the face of enterprise IT in terms of economics, scalability and accessibility. Visual WebGui Instant CloudMove transforms your Client/Server application code to run natively as .NET on Windows Azure, and enables your Azure Client/Server application to have secured-by-design, plain Web or mobile browser based accessibility. Itzik Spitzen, VP of R&D at Gizmox, will present a webcast on Microsoft Academy on Tuesday 8 March at 8am (USA Pacific Time) explaining how VWG bridges the gap between Client/Server applications' richness, performance, security and ease of development and the Cloud's economics and scalability. He will then introduce the unique migration and modernization tools which empower customers like Advanced Telemetry, Communitech, and others, to transform their existing Client/Server business applications into native Web applications (rich ASP.NET) and then deploy them on Windows Azure, which allows accessibility from any browser (or mobile, if desired by the customer). Registration page on Microsoft Academy: https://www.eventbuilder.com/microsoft/event_desc.asp?p_event=1u19p08y

    Read the article

  • JPOUG Tech Talk Next Week!

    - by roy.swonger
    Mike and I are really looking forward to our trip to Japan next week, not least because we will have the opportunity to visit the Japan Oracle User Group for a Tech Talk. You can find all of the details about this event here: JPOUG Tech Talk, Tuesday, 12-NOV-2013. The topic for our talk will be "Different Ways to Upgrade, Migrate, and Consolidate with Oracle Database 12c." We will discuss changes and enhancements to database upgrade, how to move into a multitenant database environment, and new features that make database migration easier and faster than ever. Thank you to our friends at the JPOUG for making this event possible!

    Read the article

  • Sales tracker that allows complex queries?

    - by feklee
    On a site, every click on a product should be registered by a sales tracker: price, type, etc. The sales tracker should provide an API so that complex queries can be performed, such as: Which products of type "teapot" had a price below 20 EUR? Requirements: Recorded data should be available for querying no later than two hours after it has been recorded. (For example, there are reports that Google Analytics may take up to 24h to update data; that is not acceptable.) Querying doesn't need to be fast, but recording does (of course). Which sales tracker allows complex queries against collected data?
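    As a sketch of the kind of ad-hoc query being asked for, assuming the tracker exposes its raw click events (the clicks table and its columns below are hypothetical, not any particular product's schema):

    ```python
    import sqlite3

    # Hypothetical click-event schema; real trackers expose something comparable
    # through their query APIs.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE clicks(product TEXT, type TEXT, price REAL, ts TEXT)")
    conn.executemany(
        "INSERT INTO clicks VALUES (?, ?, ?, ?)",
        [
            ("Mini Teapot", "teapot", 14.90, "2012-06-01T10:00:00"),
            ("Family Teapot", "teapot", 24.50, "2012-06-01T10:05:00"),
            ("Mug", "cup", 4.90, "2012-06-01T10:06:00"),
        ],
    )

    # "Which products of type 'teapot' had a price below 20 EUR?"
    for product, price in conn.execute(
        "SELECT DISTINCT product, price FROM clicks WHERE type = ? AND price < ?",
        ("teapot", 20.0),
    ):
        print(product, price)
    ```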

    Read the article

  • Oracle Solaris 11 Cheat Sheet

    - by Markus Weber
    Need to quickly know, or be reminded about, how to create network configuration profiles in Oracle Solaris 11? How to configure VLANs? How to manipulate Zones? How to use ZFS shadow migration? To have those answers, and many more, neatly in front of you, we created this cheat sheet (pdf). It was originally developed by Joerg Moellenkamp, the author of the very popular blog c0t0d0s0.org and of the "Less Known Solaris Features" series; since then, more people at Oracle have jumped in and added more and more very useful commands to it. And it may keep evolving, so keep checking! The link to it can also be found on our new Oracle Solaris Evaluation page.

    Read the article

  • Oracle Streamlines Tracking of Global Carbon Footprint and Greenhouse Gas Emissions

    - by Evelyn Neumayr
    Oracle has automated its global carbon footprint and greenhouse gas emissions measurement using Oracle Environmental Accounting and Reporting. By using this solution, Oracle was able to increase organizational efficiency and reduce the need for labor intensive, manual processes in the tracking of greenhouse gas (GHG) emissions for both voluntary and legislated environmental reporting. The move to Oracle Environmental Accounting and Reporting enables Oracle to more effectively meet both internal and governmental reporting needs, while addressing the associated economic mandates for reporting emissions and sustainability efforts. Organizations across the company can now record environmental data such as energy consumed or energy generated at facilities or locations within the enterprise, and can automatically calculate corresponding GHG emissions resulting from the use of emission sources. In addition, Oracle Environmental Accounting and Reporting includes data integration from multiple applications to ensure proper representation and calculation of emissions across the globe. The result is access to fast, accurate data and reporting to help the company meet its sustainability goals.

    Read the article

  • Sending state diffs (deltas) and unreliable connections

    - by spaceOwl
    We're building a realtime multiplayer game in which each player is responsible for reporting its state on every iteration of the game loop. The state updates are broadcast using unreliable UDP. To minimize the state data sent, we've come up with a system that will send only deltas (whatever state data has changed). This method, however, is flawed, since a lost packet will mean that other players will not receive the delta, making the game behave in an unexpected way. For example, assume that the state is comprised of: { positionX, positionY, health } Frame 1 - positionX changed --> send a packet with positionX only. Frame 2 - health changed // lost! Frame 3 - positionY changed --> send a packet with positionY only. // Other players don't know about the health change. How can one overcome this issue, then? Sending the entire state every time is not always feasible.
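    One common remedy is to send each delta relative to the last state the receiver acknowledged, rather than the last state sent, so the changes lost with a dropped packet keep being resent until an ack arrives. A minimal sketch (field names from the question, everything else illustrative):

    ```python
    # Deltas are computed against the last state the peer ACKNOWLEDGED, not the
    # last state sent, so changes lost with a dropped packet keep being resent.

    class DeltaSender:
        def __init__(self):
            self.current = {"positionX": 0, "positionY": 0, "health": 100}
            self.acked = dict(self.current)  # last state the peer confirmed applying
            self.seq = 0

        def build_packet(self):
            """Include every field that differs from the last acknowledged state."""
            self.seq += 1
            delta = {k: v for k, v in self.current.items() if v != self.acked.get(k)}
            return {"seq": self.seq, "delta": delta}

        def on_ack(self, acked_state):
            """Called when the receiver echoes back the state it has applied."""
            self.acked = dict(acked_state)


    sender = DeltaSender()
    sender.current["positionX"] = 10           # frame 1
    print(sender.build_packet())               # {'seq': 1, 'delta': {'positionX': 10}}
    sender.current["health"] = 90              # frame 2 - suppose this packet is lost
    print(sender.build_packet())
    sender.current["positionY"] = 5            # frame 3 - no ack received yet, so the
    print(sender.build_packet())               # packet still carries the health change
    ```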

    Read the article

  • Document Management System

    - by rjayavrp
    Is there any Document Management System for Ubuntu? I tried Alfresco, RavenDB, Owl, and Document Manager. Alfresco and RavenDB are heavy, more than my requirements call for. Owl has source issues. Document Manager I'm still trying to install. It should keep data on the same machine, as I am looking at this more for internal purposes. It should allow uploading Zip files as well; if it extracts Zips, that would be a great plus. It should allow sending email to preconfigured email addresses. It should allow uploading data of around 100 MB in one go. It should maintain a history of documents, including deleted documents. It should allow role-based document access. It should be free :) It should not do any spoofing on data; the documents are confidential. Please share your knowledge. Thanks.

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks, making up one job (pass information from one server to another):
    - Fetch task (get the data, slowly)
    - Send task (send the data, comparatively quickly)
    The difficulty we're having is that we don't know whether to break the tasks into separate jobs, or process the job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method:
    Split
    - Job lease length reflects job length: rather than the total of the two
    - Finer granularity on recovery: if we lose outgoing connectivity we can tell them all to retry
    - The starting state of the second task is saved to job history: helps with debugging (although similar logging could be added in the single-task method)
    Single
    - Single job to be scheduled: less processing overhead
    - Data not stale on recovery: if the outgoing downtime is quite long, the pending Send jobs could be outdated
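    For what it's worth, here is a toy sketch of the "split" option, where the fetch job records its result and enqueues the send job, so the second task's starting state is captured; the queue, job types and payloads are all hypothetical:

    ```python
    # Toy in-process queue showing the "split" design: the fetch handler records its
    # result in the payload of a newly enqueued send job.
    from collections import deque

    queue = deque()


    def enqueue(job_type, payload):
        queue.append({"type": job_type, "payload": payload})


    def handle_fetch(payload):
        data = f"rows fetched from {payload['source']}"          # the slow part
        enqueue("send", {"target": payload["target"], "data": data})


    def handle_send(payload):
        print(f"sending {payload['data']!r} to {payload['target']}")  # the fast part


    HANDLERS = {"fetch": handle_fetch, "send": handle_send}

    enqueue("fetch", {"source": "server-a", "target": "server-b"})
    while queue:
        job = queue.popleft()
        HANDLERS[job["type"]](job["payload"])
    ```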

    Read the article

  • Confusion on HLSL Samplers. Can I Set Samplers Inside Functions?

    - by Kyle Connors
    I'm trying to create a system where I can instance a quad to the screen, however I've run into a problem. Like I said, I'm trying to instance the quad, so I'm trying to use the same geometry several times, and I'm trying to do it in one draw call. The issue is, I want some quads to use different textures, but I can't figure out how to get the data into a sampler so I can use it in the pixel shader. I figured that since we can simply pass in the 4 bytes of our IDirect3DTexture9* to set the global texture, I could do so when passing in my dynamic buffer (which also stores each object's world matrix and UV data). Now that I'm sending the data, I can't figure out how to get it into the sampler, and I'm really tempted to assume that it's simply not possible. Is there any way I could achieve this?

    Read the article

  • Cannot connect to Internet on 11.04 using BSNL EVDO Prithvi Card

    - by Joy
    I cannot connect to the Internet using a BSNL EVDO Prithvi data card. I went through some websites that offered help, installed the wvdial package and tried again, but was unsuccessful. I have read that Ubuntu 11.04 automatically detects the data card and you only need to configure "Network Manager" for it to work. I did exactly that, but the result is the same. The OS detects the data card and the presence of the network, but it cannot log in. I have read in some forums that Ubuntu 11.04 does not have support for the BSNL EVDO Prithvi; is that true? I re-checked the "User ID" and "Password". It's working on Windows. Please help me fix this.

    Read the article

  • How can I fix latency problems for car game?

    - by Freddy
    Basically I'm trying to make an online car racing game for iOS using Game Center real-time multiplayer. I have set up a timer that sends data every 0.02 seconds to the other player with the current position and current angle. However, sometimes it will take longer than these 0.02 seconds for the packet to be sent and then received. For this case I have implemented a method that "calculates" what the next position should be, if no position is received, based on the last position and angle. However, when the data then arrives, say, 0.04 seconds later, it will change back to the last position, which results in the car "jumping" back and lagging. And if I just keep ignoring the data, it will never take any input from the other user. Is there any way to prevent this? I suppose this needs to be fixed with some client-side algorithm.
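    The usual remedies are to tag every update with a sequence number and drop late or out-of-order packets, to keep extrapolating (dead reckoning) between packets, and to blend corrections in over several frames instead of snapping back. A minimal sketch, with illustrative names and constants:

    ```python
    import math

    class RemoteCar:
        CORRECTION_RATE = 0.2  # fraction of the remaining error removed per frame

        def __init__(self):
            self.latest_seq = -1
            self.position = [0.0, 0.0]  # what we actually draw
            self.target = [0.0, 0.0]    # our best estimate of the true position
            self.angle = 0.0
            self.speed = 0.0

        def on_packet(self, seq, position, angle, speed):
            if seq <= self.latest_seq:
                return  # stale or out-of-order packet: ignore it, never jump backwards
            self.latest_seq = seq
            self.target = list(position)
            self.angle, self.speed = angle, speed

        def update(self, dt):
            # Dead reckoning from the last known heading/speed...
            self.target[0] += math.cos(self.angle) * self.speed * dt
            self.target[1] += math.sin(self.angle) * self.speed * dt
            # ...and ease the drawn position toward the estimate instead of snapping.
            for i in (0, 1):
                self.position[i] += (self.target[i] - self.position[i]) * self.CORRECTION_RATE


    car = RemoteCar()
    car.on_packet(seq=1, position=(1.0, 0.0), angle=0.0, speed=10.0)
    car.update(dt=0.02)  # called every frame; packets may arrive less often
    print(car.position)
    ```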

    Read the article

  • How to use PostgreSQL on AWS - Ubuntu 11.10

    - by That1Guy
    I'm extremely new to cloud computing, Linux, and PostgreSQL, so if this is a stupid question, I apologize. I've managed to create an m1.large instance running Ubuntu 11.10, connect via PuTTY SSH, and install PostgreSQL (sudo apt-get install postgresql), but that is as far as I've gotten. My goal is to run several Python web-scraping scripts that I've written on this instance (so as not to eat up all of our bandwidth (smaller company at the moment)), insert the scraped data into a PostgreSQL table on the instance, and later retrieve that data to store on our local server (as I've heard AWS EBS is unreliable and I don't want to take chances). How can I configure PostgreSQL on my AWS instance? How can I access the data from my machine? I currently use pgAdmin3 to manage PostgreSQL on our local server. Can I use this same interface to manage PostgreSQL on my AWS instance? Any suggestions, solutions, links, etc. are greatly appreciated. And again, if this is a dumb question, I apologize.
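    For remote access, the standard (non-AWS-specific) PostgreSQL steps are to open port 5432 in the instance's security group, set listen_addresses in postgresql.conf, and add a suitable host rule to pg_hba.conf; after that, pgAdmin3 or a script can connect directly. As a sketch of querying the instance from a local Python script (host name, database, table and credentials below are placeholders):

    ```python
    import psycopg2  # pip install psycopg2-binary

    # Placeholder connection details for the EC2-hosted PostgreSQL instance.
    conn = psycopg2.connect(
        host="ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
        port=5432,
        dbname="scraper",
        user="scraper_user",
        password="change-me",
    )

    with conn, conn.cursor() as cur:
        # Pull the last day's scraped rows down to the local server.
        cur.execute(
            "SELECT * FROM scraped_data WHERE scraped_at >= now() - interval '1 day'"
        )
        for row in cur.fetchall():
            print(row)

    conn.close()
    ```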

    Read the article

  • Cannot get Atheros AR9285 to work on 12.10

    - by user100449
    I've already gone through all possible advice and still cannot get my Atheros AR9285 wireless card to start. I have a Toshiba Portege Z830 laptop where the WiFi already worked under Windows 7, but after migrating to Ubuntu 12.10 I'm not able to get it to work. This is what I see from the command lshw:

        *-network UNCLAIMED
             description: Network controller
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:c0500000-c050ffff

    And this is what I see from rfkill list:

        0: Toshiba Bluetooth: Bluetooth
              Soft blocked: yes
              Hard blocked: no
        1: hci0: Bluetooth
              Soft blocked: yes
              Hard blocked: no

    Any idea?

    Read the article

  • How would one start a website with robust and scaleable hosting?

    - by Richard DesLonde
    This question is about hardware and hosting, and how to "bootstrap" them. If you built a really great website, how could you have it hosted at low cost, but in a way that reassures customers that their data is safe and available? As an example, say I have a web application I developed for small companies to use for their accounting, a replacement for QuickBooks. Aside from getting a bunch of money from VCs or angels, how would you be able to host this so that you could guarantee your customers that their data won't be lost, and that the site will always be up so that they can always get to their data?

    Read the article

  • Does my approach for building a real time monitoring system make sense? [closed]

    - by sameer
    I am developing an application that will display a dashboard showing data from different SQL databases. This needs to happen in almost real time; our refresh time is about 5 minutes. My approach so far is: Develop a Windows service to accumulate the data from various SQL Server instances. Persist those details into a SQL DB, from which the dashboard will display them on the web page. Fetching of data by the Windows service will be triggered every x minutes. The details of the SQL Server instances will be stored in the SQL DB which the Windows service will refer to. Does my approach make sense?

    Read the article

  • Test Doubles : Do they go in "source packages" or "test packages"?

    - by sbrattla
    I've got a couple of data access objects (DefaultPersonServices.class, DefaultAddressServices.class) which are responsible for various CRUD operations in a database. A few different classes use these services, but as the services require that a connection be established with a database, I can't really use them in unit tests as they take too long. Thus, I'd like to create test doubles for them and simply make FakePersonServices.class and FakeAddressService.class implementations which I can use throughout testing. Now, this is all good (I assume)... but my question relates to where I put the test doubles. Should I keep them along with the default implementations (aka "real" implementations), or should I keep them in a corresponding test package? The default implementations are found in Source Packages: com.company.data.services. Should I keep the test doubles here too, or should the test doubles rather be in Test Packages: com.company.data.services?
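    As an aside on what such a double can look like, a fake is usually just an in-memory implementation of the same interface. A minimal sketch in Python (the class name mirrors the question; the methods and fields are illustrative, not the asker's actual API):

    ```python
    # In-memory stand-in for DefaultPersonServices: same interface, no database.
    class FakePersonServices:
        def __init__(self):
            self._people = {}
            self._next_id = 1

        def create(self, person):
            person_id = self._next_id
            self._next_id += 1
            self._people[person_id] = dict(person)
            return person_id

        def find(self, person_id):
            return self._people.get(person_id)

        def delete(self, person_id):
            self._people.pop(person_id, None)


    # Usage in a test:
    services = FakePersonServices()
    pid = services.create({"name": "Ada"})
    assert services.find(pid)["name"] == "Ada"
    ```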

    Read the article

  • Is it illegal to forward copyrighted content? [closed]

    - by Mike
    OK, this may be a strange question, but let's start: if I illegally download a movie (for example...) from an HTTP web server, there are many routers between me and the web server which are forwarding the data to my PC. As I understand it, the owners of the routers are not legally responsible for the data they forward (please correct me if I'm wrong). What if I were to install a client of a peer-to-peer network on my PC and this client (peer) forwarded copyrighted content received from peers to other peers? Hope someone understands what I mean ;-) Any answer or comment would be highly appreciated. Mike Update 1: I'm asking this question because I want to develop a p2p application and am trying to figure out how to prevent illegal content sharing/distribution (if forwarding content is really illegal...) Update 2: What if the data forwarded by my peer is encrypted, so that I'm technically not able to read and check it?

    Read the article

  • Twitter Customer Sentiment Analysis

    - by Liam McLennan
    The breakable toy that I am currently working on is a Twitter customer sentiment analyser. It scrapes Twitter for tweets relating to a particular organisation, applies a machine learning algorithm to determine if the content of a tweet is positive or negative, and generates reports of the sentiment data over time, correlated to dates, events and news feeds. I'm having lots of fun building this, but I would also like to learn if there is a market for quantified sentiment data. So that I can start to show people what I have in mind, I have created a mockup of the simplest and most important report. It shows customer sentiment over time, with important events highlighted. As the user moves their mouse to the right (forward in time), the source data area scrolls up to display the tweets from that time. The tweets are colour-coded based on sentiment rating. After I started working on this project I discovered that a team of students has already built something similar. It is a lot of fun to enter your employer's name and see what it says.
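    For a feel of the classification step, here is a deliberately naive word-list scorer; the real project uses a machine learning algorithm, and this sketch with its made-up word lists is only meant to illustrate turning tweets into a signal that can be aggregated over time:

    ```python
    # Naive word-list sentiment scoring; word lists and tweets are made up.
    POSITIVE = {"great", "love", "awesome", "thanks", "fast"}
    NEGATIVE = {"broken", "hate", "slow", "refund", "worst"}

    def score(tweet):
        words = {w.strip(".,!?").lower() for w in tweet.split()}
        return len(words & POSITIVE) - len(words & NEGATIVE)

    tweets = [
        "Love the new release, support was fast!",
        "Still broken after the update, worst experience ever.",
    ]
    for t in tweets:
        s = score(t)
        label = "positive" if s > 0 else "negative" if s < 0 else "neutral"
        print(label, "-", t)
    ```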

    Read the article

  • Help me start my Atheros AR9285 working on Ubuntu 12.10

    - by user100449
    Could you please help me with my Atheros AR9285 wireless card on Ubuntu 12.10? I've already gone through all possible advice and still cannot get my wireless card to start. I have a Toshiba Portege Z830 laptop where WiFi already worked under Windows 7, but after migrating to Ubuntu 12.10 I'm not able to get it to work. My current situation is shown in the image below. This is what I see from the command lshw:

        *-network UNCLAIMED
             description: Network controller
             product: AR9285 Wireless Network Adapter (PCI-Express)
             vendor: Atheros Communications Inc.
             physical id: 0
             bus info: pci@0000:02:00.0
             version: 01
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:c0500000-c050ffff

    And this is what I see from rfkill list:

        0: Toshiba Bluetooth: Bluetooth
              Soft blocked: yes
              Hard blocked: no
        1: hci0: Bluetooth
              Soft blocked: yes
              Hard blocked: no

    Any idea? Thank you, Michal

    Read the article

  • pop up html as javascript string instead of hidden div for seo [closed]

    - by user1324762
    Possible Duplicate: How bad is it to use display: none in CSS? I have heard that using the display:none or visibility:hidden CSS properties is not a very good idea for SEO purposes. I have about 4 different pop-up windows to display, and each one has about 20 words inside it. I can create hidden divs. Another option is to store the div HTML elements as a JavaScript string. In this way the pop-up HTML elements will be generated from the JavaScript string. This will still be faster than using Ajax since the data is static. Is this method absolutely safe for SEO? P.S.: I was just asking about a similar question at http://stackoverflow.com/questions/12389075/storing-data-in-javascript-array-for-further-use, but this one is different: it is about static data and about SEO.

    Read the article

  • Oracle Forms has a future!

    - by A&C Redaktion
    Oracle Gold Partner PITSS (professional it software & services) is celebrating this good news for our proven development platform Oracle Forms with a big roadshow in eight cities. From Hamburg to Vienna, under the title "Oracle Forms from A to Z", PITSS and Oracle offer talks, discussion and an exchange of experience for old and new Forms customers. Each day from 9:30 a.m. to 4:30 p.m., the topics include, among others, A as in ADF or APEX, B as in BPEL, E as in exchange of experience, M as in migration, W as in WebLogic Server and, of course, Z as in Zukunft (the future). The roadshow is aimed at software developers, IT managers, software architects and project leads. Participation is free of charge; use the respective link to register for a stop near you: 06.11. Hannover, 07.11. Hamburg, 08.11. Berlin/Potsdam, 09.11. Düsseldorf, 13.11. Dreieich, 14.11. Stuttgart, 15.11. München, 29.11. Wien.

    Read the article

  • Google opens its statistical data visualization tool to the public: a first step towards BI in the Cloud?

    Google opens its statistical data visualization tool to the public. A first step towards BI in the Cloud? Google has just made the analysis tool behind its "Public Data Explorer" solution available to everyone. As a reminder, Google Public Data Explorer is an online service launched in March 2010 by Google Labs. It lets you visualize worldwide statistical data in various graphical representations. It includes, for example, data from the World Bank or from Eurostat. The new data format, Google Dataset Publishing Language (DSPL), based on XML, developed by Google Labs and used by Public Data Explorer, ...

    Read the article
