Search Results

Search found 3353 results on 135 pages for 'volume activation'.

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency." (McKinsey Global Institute, May 2011)

    Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario come from its pre-built Knowledge Modules, which drive processing in Hadoop and let you use the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or visit our website at www.oracle.com/bigdata

    Read the article

  • Developing a Configurable Pricing Program

    - by Ben DeMott
    The organization I work at has some interesting requirements when it comes to pricing for online commerce. Currently the developers write different 'pricing rules' and those rules can be applied to our items based on attributes of the items. For example:

        INPUTS: [cost, sug_retail, discontinued, warehouse_qty, orderable_qty, brand, type, days_available, shipping_rate, weight, map_protected, map_discount]
        MATCH:  brand=x, warehouse_qty 1, discontinued=True, map_protected=False
        SET:    retail_price = (sug_retail * 0.95), offer_price1 = (cost * 1.25 + shipping_rate)

    I am looking to allow the merchandising team to have more control over the pricing and formulas - they are, after all, technical enough to write Excel formulas. I've been looking at writing a desktop application that uses something like numexpr (http://code.google.com/p/numexpr/) or SymPy (http://sympy.org/en/index.html) to allow non-programmers to integrate their own logic into our pricing backend. We have multiple price tiers we have to set, for multiple markets, so an elegant solution is needed. It's getting frustrating for the dev team to continually tweak/manage all of the pricing rules (we sell over 200 brands in 3 markets). My questions: does this seem like a decent approach? Can you think of a better way to parse a string mathematical grammar? Can you think of a different way for users to provide formulas to integrate into an automated pricing system? Does anyone know of any examples of existing applications that do this? Excel and Access are out of the question - the volume of data we manipulate has already proven the need to automate it - now we just need some visibility into that automation.
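
    One way this approach can be made concrete is to keep each rule as data - match conditions plus formula strings - and evaluate the formulas with numexpr, which parses only arithmetic and boolean expressions rather than arbitrary Python. A minimal sketch (the rule layout and item fields are illustrative assumptions modeled on the example above, not the poster's actual schema):

        import numexpr as ne

        item = {
            "cost": 40.0, "sug_retail": 99.95, "shipping_rate": 6.5,
            "warehouse_qty": 1, "discontinued": True, "map_protected": False,
        }

        rule = {
            # MATCH: every (field, expected) pair must hold for the rule to apply
            "match": {"warehouse_qty": 1, "discontinued": True, "map_protected": False},
            # SET: each output price is a plain arithmetic expression over item fields
            "set": {
                "retail_price": "sug_retail * 0.95",
                "offer_price1": "cost * 1.25 + shipping_rate",
            },
        }

        def apply_rule(item, rule):
            if any(item.get(k) != v for k, v in rule["match"].items()):
                return {}
            # numexpr cannot call arbitrary Python, so merchandiser-supplied
            # formulas are far safer to evaluate this way than with eval()
            return {name: float(ne.evaluate(expr, local_dict=item))
                    for name, expr in rule["set"].items()}

        print(apply_rule(item, rule))  # {'retail_price': 94.9525, 'offer_price1': 56.5}

    A spreadsheet-like front end then only needs to read and write these rule dictionaries; swapping numexpr for sympy.sympify would additionally allow symbolic sanity checks on a formula, at the cost of slower evaluation.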

    Read the article

  • Versioning millions of files with distributed SCM

    - by C. Lawrence Wenham
    I'm looking into the feasibility of using off-the-shelf distributed SCMs such as Git or Mercurial to manage millions of XML files. Each file would be a commercial transaction, such as a purchase order, that would be updated perhaps 10 times during the lifecycle of the transaction until it is "done" and changes no more. And by "manage", I mean that the SCM would be used not just to version the files, but also to replicate them to other machines for redundancy and transfer of IP. Let's suppose, for the sake of example, that a goal is to provide good performance if it was handling the volume of orders that Amazon.com claimed to have at its peak in December 2010: about 150,000 orders per minute. We're expecting the system to be distributed over many servers in order to get reasonable performance. We're also planning to use solid-state drives exclusively. There is a reason why we don't want to use an RDBMS for primary storage, but it's a bit beyond the scope of this question. Does anyone have first-hand experience with the performance of distributed SCMs under such a load, and what strategies were used? Open source preferred, since the final product is to be FOSS, too.

    Read the article

  • What can I do about rsync of large files killing my laptop's wifi connection

    - by David Dean
    When I run an rsync to back up my home folder over the network, like so:

        rsync -avhz --progress --delete /home/dbdean/ [email protected]:/home/backups/david/

    I seem to have problems with my quite large .VirtualBox/HardDisks/Windows XP.vdi file. Occasionally the wifi will silently fail (the transfer stops, and any other network access is broken). If I reconnect the wifi to my network before the transfer times out, it happily keeps going (and other network access is back), but I can't just leave it unattended most of the time, as I have to keep an eye on it. I'm guessing this is probably a bug in the wireless card related to a particularly high sustained volume of network usage, but I'm not really sure where to start with diagnosing this problem so that I can provide a good bug report. Or it could be something else, I guess. Any help would be appreciated. My network card is an Atheros Communications Inc. AR9285, as lspci -k shows:

        43:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
                Subsystem: Hewlett-Packard Company Device 3040
                Kernel driver in use: ath9k
                Kernel modules: ath9k

    Read the article

  • Entity Framework with large systems - how to divide models?

    - by jkohlhepp
    I'm working with a SQL Server database with 1000+ tables, another few hundred views, and several thousand stored procedures. We are looking to start using Entity Framework for our newer projects, and we are working on our strategy for doing so. The thing I'm hung up on is how best to split the tables into different models (EDMX or DbContext if we go code first). I can think of a few strategies right off the bat:

    Split by schema. We have our tables split across probably a dozen schemas. We could do one model per schema. This isn't perfect, though, because dbo still ends up being very large, with 500+ tables / views. Another problem is that certain units of work will end up having to do transactions that span multiple models, which adds to complexity, although I assume EF makes this fairly straightforward.

    Split by intent. Instead of worrying about schemas, split the models by intent. So we'll have different models for each application, or project, or module, or screen, depending on how granular we want to get. The problem I see with this is that there are certain tables that inevitably have to be used in every case, such as User or AuditHistory. Do we add those to every model (which violates DRY, I think), or are those in a separate model that is used by every project?

    Don't split at all - one giant model. This is obviously simple from a development perspective, but from my research and my intuition it seems like it could perform terribly: at design time, at compile time, and possibly at run time.

    What is the best practice for using EF against such a large database? Specifically, what strategies do people use in designing models against this volume of DB objects? Are there options that I'm not thinking of that work better than what I have above? Also, is this a problem in other ORMs such as NHibernate? If so, have they come up with any better solutions than EF?

    Read the article

  • Having trouble installing Guest Additions for a Xubuntu 13.10 guest OS in VirtualBox 4.2.10

    - by Duval Pearson III
    I am using Ubuntu 13.04 64-bit as my host operating system, running VirtualBox 4.2.10. I get this message when I tell VirtualBox to install the Guest Additions (Ctrl+D), mount the volume in the guest OS, and run the VBoxLinuxAdditions.run file as root with:

        sudo ./VBoxLinuxAdditions.run

    It starts and then comes to these error messages:

        Verifying archive integrity... All good.
        Uncompressing VirtualBox 4.2.10 Guest Additions for Linux..........
        VirtualBox Guest Additions installer
        Removing existing VirtualBox non-DKMS kernel modules ...done.
        Building the VirtualBox Guest Additions kernel modules
        The headers for the current running kernel were not found. If the
        following module compilation fails then this could be the reason.
        Building the main Guest Additions module ...done.
        Building the shared folder support module ...fail!
        (Look at /var/log/vboxadd-install.log to find out what went wrong)
        Doing non-kernel setup of the Guest Additions ...done.
        Installing the Window System drivers
        Warning: unknown version of the X Window System installed.
        Not installing X Window System drivers.
        Installing modules ...done.
        Installing graphics libraries and desktop services components ...done.
        allusers@allusers-VirtualBox:/media/allusers/VBOXADDITIONS_4.2.10_84104$

    I followed everything from the official VirtualBox manual on installing Guest Additions for Linux. I also used some other commands, such as:

        sudo apt-get install build-essential linux-headers-$(uname -r) dkms

    and:

        sudo apt-get install virtualbox-guest-x11

    I rebooted after executing those commands and it still won't work. It still says the kernel modules are missing and the window is not seamless. Any idea what could be the problem?

    Read the article

  • Software for video subscription service

    - by Clinton Blackmore
    I'd like to sell instructional videos over the web. Primarily, I'd like users to subscribe to the site and be allowed access to videos over the internet. Secondarily, I might sell DVDs for those who have poor internet connections or would like a physical copy, or possibly I'd sell eBooks and the like in the future. Regarding the subscriptions:

    - I'd like a system that automatically sends out e-mails when it is time to renew
    - I'd like to be able to offer free trials
    - Users without a free trial or subscription should not be able to access the content

    Incidentally, I plan to host videos on my current web host and move them to a CDN when volume (and capital) make this a good idea. While I have no intention to go crazy with the DRM, it seems expedient not to directly link to the files -- how can I link to them indirectly? It would be nice to support multiple payment processors -- specifically, I'd like to avoid a PayPal-only approach. Are there any web applications (or plugins) you'd recommend for something like this? While I've set up and administered several web technologies, I've never done anything with e-commerce. I see there are possibilities like osCommerce, one friend recommends using WordPress with plugins, and it really appears that for any given CMS, you can graft on components like this, although I imagine that not all are created equal. As I'm not tied to a particular web application (and, while open source software that can run on a LAMP [p=perl, python, php] stack is preferable), I'd like to make a good choice at the beginning.
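
    On the indirect-linking question: a common pattern is to serve files through a script that checks a signed, expiring token, so the real file path is never exposed and stale links stop working. A minimal sketch of the token half in Python (the endpoint name, parameter names, and secret are illustrative assumptions, not from any particular e-commerce package):

        import hashlib, hmac, time

        SECRET = b"replace-with-a-long-random-key"  # hypothetical server-side secret

        def make_token(path: str, expires: int) -> str:
            msg = f"{path}:{expires}".encode()
            return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

        def signed_url(path: str, ttl: int = 3600) -> str:
            # Hand this URL to a logged-in subscriber; it dies after ttl seconds.
            expires = int(time.time()) + ttl
            return f"/stream?file={path}&expires={expires}&token={make_token(path, expires)}"

        def verify(path: str, expires: str, token: str) -> bool:
            if int(expires) < time.time():
                return False  # link has expired
            return hmac.compare_digest(token, make_token(path, int(expires)))

    The hypothetical /stream handler would check verify() plus the user's subscription status before reading the video off disk (or redirecting to the CDN) - which is also exactly the hook where the free-trial and renewal rules can live.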

    Read the article

  • Getting Optimal Performance from Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    Performance tuning and optimization in E-Business Suite environments can involve many different components and diagnostic tools. Samer Barakat, Senior Architect in our Applications Performance group, held an OpenWorld 2013 session that covered:

    - Performance triage, analysis and diagnostic tools
    - Optimizing the E-Business Suite application tier, including Concurrent Manager
    - Optimizing the E-Business Suite database tier
    - Optimizing the E-Business Suite on Real Application Clusters (RAC)
    - E-Business Suite on engineered systems, including Exadata and Exalogic
    - Optimizing E-Business Suite data management, including archiving and purging

    The Applications Performance group works with the world's largest E-Business Suite customers to isolate and resolve performance bottlenecks. This team has helped tune the E-Business Suite environments of the world's largest companies to handle staggering amounts of transactional volume in multi-terabyte databases. This group also publishes our official Oracle Apps benchmarks, white papers, and performance metrics. This is an essential set of tips and techniques that all EBS sysadmins and DBAs can use to improve the performance of their environments: Getting Optimal Performance from Oracle E-Business Suite (PDF, 1.7 MB). OpenWorld 2013 presentations are only available for approximately six months -- until ~March 2014. Download this one while it's still available.

    Related Articles: E-Business Suite Technology Sessions at OpenWorld 2013; OAUG/Collaborate Recap: Best Practices for E-Business Suite Performance Tuning

    Read the article

  • What is the evidence that an API has exceeded its orthogonality in the context of types?

    - by hawkeye
    Wikipedia defines software orthogonality as: orthogonality in a programming language means that a relatively small set of primitive constructs can be combined in a relatively small number of ways to build the control and data structures of the language. The term is most frequently used regarding assembly instruction sets, as orthogonal instruction set. Jason Coffin has defined software orthogonality as: highly cohesive components that are loosely coupled to each other produce an orthogonal system. C. Ross has defined software orthogonality as: the property that means "changing A does not change B". An example of an orthogonal system would be a radio, where changing the station does not change the volume and vice-versa. Now there is a hypothesis published in ACM Queue by Tim Bray - which some have called the Bánffy-Bray Type System Criteria - which he summarises as: static typing's attractiveness is a direct function (and dynamic typing's an inverse function) of API surface size; dynamic typing's attractiveness is a direct function (and static typing's an inverse function) of unit testing workability. Now Stuart Halloway has reformulated Bánffy-Bray as: the more your APIs exceed orthogonality, the better you will like static typing. My question is: What is the evidence that an API has exceeded its orthogonality in the context of types?

    Clarification: Tim Bray introduces the idea of orthogonality and APIs. Where you have one API and it is mainly dealing with Strings (i.e. a web server serving requests and responses), then a uni-typed language (Python, Ruby) is 'aligned' to that API - because the type system of these languages isn't sophisticated, but it doesn't matter since you're dealing with Strings anyway. He then moves on to Android programming, which has a whole bunch of sensor APIs, which are all 'different' to the web server API that he was working on previously. Because you're not just dealing with Strings, but with different types, the API is non-orthogonal. Tim's point is that there is an empirical relationship between your 'liking' of types and the API you're programming against (i.e. a subjective point is actually objective depending on your context).
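
    For what it's worth, C. Ross's "changing A does not change B" definition translates directly into code. A toy sketch in Python (a hypothetical class, purely illustrative):

        class Radio:
            """Orthogonal controls: each setter touches only its own state."""
            def __init__(self):
                self.station = 88.1
                self.volume = 5

            def set_station(self, mhz):
                self.station = mhz    # never reads or writes self.volume

            def set_volume(self, level):
                self.volume = level   # never reads or writes self.station

    A non-orthogonal variant would be a set_station that also resets self.volume to some default - the two controls could then no longer be reasoned about independently.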

    Read the article

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books. In the past I've read comics a few times but never really got into them. When I wanted to start a collection I decided on either video games or comics, yet stayed away from comics because I am less familiar with them. In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks. After reading them and understanding their basic flow I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white). On my first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A, I think)
    - Daredevil: King of Hell's Kitchen trade paperback
    - Peter Parker: Spiderman - One Small Break trade paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again. After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed. While Peter Parker: Spiderman was OK, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown. Now, a few years back I picked up the game The Darkness for PS3. I knew it was based off a comic but never read the comic. I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 trade paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost track of who the enemy in the current series is. Out of all of them, I think issue #83 was my favorite. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background: B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms: The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. It is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in general degradation of performance.

    Remedy: There could be many contributing factors behind a B2B server's degradation of performance. However, one of the first places to tune the server from the out-of-the-box, default configuration is to change the number of internal engine threads allocated within the B2B server. Usually the default configuration for the B2B server engine threads is not suitable for high-volume messaging loads. So, it is necessary to increase the counts for 3 types of such threads, by specifying the appropriate B2B server properties via the EM console, namely:

    - Inbound - b2b.inboundThreadCount
    - Outbound - b2b.outboundThreadCount
    - Default - b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory. In other words, the inbound threads process the inbound messages that are coming into the B2B server from an external endpoint. Similarly, the outbound threads process the messages that are sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. In case the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads. As found in any tuning exercise, the optimisation of these threads is usually reached via an iterative process. The best working combination of the thread counts is directly related to the system infrastructure, traffic load and several other environmental factors.

    Read the article

  • Setup for mounting kerberized nfs home directory - gssd not finding valid kerberos ticket

    - by janm
    Our home directories are exported via kerberized NFS, so the user needs a valid Kerberos ticket to be able to mount their home. This setup works fine with our existing clients & server. Now we want to add an 11.10 client and thus set up LDAP & Kerberos together with pam_mount. The LDAP authentication works and users can log in via ssh; however, their homes cannot be mounted. When pam_mount is configured to mount as root, gssd does not find a valid Kerberos ticket and the mount fails:

        Nov 22 17:34:26 zelda rpc.gssd[929]: handle_gssd_upcall: 'mech=krb5 uid=0 enctypes=18,17,16,23,3,1,2 '
        Nov 22 17:34:26 zelda rpc.gssd[929]: handling krb5 upcall (/var/lib/nfs/rpc_pipefs/nfs/clnt2)
        Nov 22 17:34:26 zelda rpc.gssd[929]: process_krb5_upcall: service is '<null>'
        Nov 22 17:34:26 zelda rpc.gssd[929]: getting credentials for client with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' being considered, with preferred realm 'PURPLE.PHYSCIP.UNI-STUTTGART.DE'
        Nov 22 17:34:26 zelda rpc.gssd[929]: CC file '/tmp/krb5cc_65678_Ku2226' owned by 65678, not 0
        Nov 22 17:34:26 zelda rpc.gssd[929]: WARNING: Failed to create krb5 context for user with uid 0 for server purple.physcip.uni-stuttgart.de
        Nov 22 17:34:26 zelda rpc.gssd[929]: doing error downfall

    When pam_mount is instead configured with the noroot=1 option, it cannot mount the volume at all:

        Nov 22 17:33:58 zelda sshd[2226]: pam_krb5(sshd:auth): user phy65678 authenticated as [email protected]
        Nov 22 17:33:58 zelda sshd[2226]: Accepted password for phy65678 from 129.69.74.20 port 51875 ssh2
        Nov 22 17:33:58 zelda sshd[2226]: pam_unix(sshd:session): session opened for user phy65678 by (uid=0)
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:69): Messages from underlying mount program:
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(mount.c:73): mount: only root can do that
        Nov 22 17:33:58 zelda sshd[2226]: pam_mount(pam_mount.c:521): mount of /Volumes/home/phy65678 failed

    So how can we allow users of a specific group to perform NFS mounts? If this does not work, can we make pam_mount use root but pass the correct uid?

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows and did the product registration using the application present on it. I have a few questions on how best to use it.

    The drive by default has some files and folders - setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files? It has a dashboard for Windows which allows tuning power options, testing the drive, etc. Will I be able to use the dashboard if I put back all these files and mount it on Windows again?

    I want to partition and format the hard disk. The data I'd like to store:
    - 10 to 20 GB files - VirtualBox images
    - around 4 GB files - DVD images
    - other movies and personal files
    What is the best filesystem for storing very huge files like these, so that they are written and accessed fast and the drive's capacity is best used?

    If I leave one of the partitions as NTFS and format the others with different filesystems, will the disk still mount on Windows, and will I be able to use the device's dashboard?

    Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.

    Read the article

  • Where to find and install ASUS motherboard drivers for Linux

    - by Dan
    This is my second day ever with Linux, and I had one heck of a time getting the nVidia drivers installed and working. Please keep in mind I am very new and just starting out. I currently have an ASUS P8Z68-V LE motherboard and I'm not sure if the drivers are installed. Where would I go to find that out? I am using GNOME as my UI. If I don't have the drivers installed, where would I go to get them? The ASUS site only gives me options to download for various Windows versions, DOS and "other" (in .ROM format). Which should I take, and how should I install it? I'm mostly looking for audio drivers. A lot of music I play, either on YouTube or with VLC, has a faint crackling in the background on Ubuntu, which gets much worse the higher I turn the volume up. Could this be something other than the drivers? I doubt it's the hardware, since the sound seems fine on Windows. I am currently running 12.04.

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short question: Does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews.

    Background: Our team currently consists of 3 full-time and 1 part-time software engineers, with plans to hire more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not attainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and Git repositories.

    Current thoughts: Since I cannot find a tool that fits our needs right now, I am turning to Trac, which is built into our repository's site. At the moment we use Trac to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or several) to log all of the bugs and comments, do some macro magic to get it into a format that Trac's ticket-import method accepts, and use Trac's ticketing system to create the action items / bug reports automatically. The auto ticket generation is darn near a must-have; adding bugs and comments one at a time from a web GUI is really painful.

    Secondary question: If this workflow makes sense, is there a good / standard template to use as a code review log?
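
    The spreadsheet-to-Trac step can be a very small script if the review log is exported to CSV first. A minimal sketch, assuming a hypothetical review-log layout and hypothetical column names for a Trac ticket-import plugin (both are illustrative, not a documented format):

        import csv

        def review_row_to_ticket(row):
            # Map a hypothetical review-log row onto hypothetical import columns.
            return {
                "summary": f"Review: {row['file']}:{row['line']}",
                "description": f"{row['reviewer']}: {row['comment']}",
                "type": "defect" if row["severity"] == "bug" else "task",
                "priority": "major" if row["severity"] == "bug" else "minor",
            }

        with open("review_log.csv", newline="") as src, \
             open("trac_import.csv", "w", newline="") as dst:
            reader = csv.DictReader(src)  # expects header: file,line,reviewer,severity,comment
            writer = csv.DictWriter(dst, fieldnames=["summary", "description", "type", "priority"])
            writer.writeheader()
            for row in reader:
                writer.writerow(review_row_to_ticket(row))

    Whatever columns the chosen import plugin actually expects, the shape of the script stays the same - and the assumed file / line / reviewer / severity / comment layout is one candidate answer to the review-log template question.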

    Read the article

  • Demo on Data Guard Protection From Lost-Write Corruption

    - by Rene Kundersma
    Today I received the news that a new demo has been made available on OTN for Data Guard protection from lost-write corruption. Since this is a typical MAA solution and a very nice demo, I decided to mention this great feature in this blog as well, even though it has been a recommended best practice for some time. When lost writes occur, an I/O subsystem acknowledges the completion of the block write even though the write I/O did not occur in the persistent storage. On a subsequent block read on the primary database, the I/O subsystem returns the stale version of the data block, which might be used to update other blocks of the database, thereby corrupting it. Lost writes can occur after an OS or storage device driver failure, faulty host bus adapters, disk controller failures and volume manager errors. In the demo, a data block lost write occurs when an I/O subsystem acknowledges the completion of the block write, while in fact the write did not occur in the persistent storage. When a primary database lost write corruption is detected by a Data Guard physical standby database, Redo Apply (MRP) will stop and the standby will signal an ORA-752 error to explicitly indicate a primary lost write has occurred (preventing corruption from spreading to the standby database).

    Links:
    - MOS Note 1302539.1, "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration"
    - Demo
    - MAA Best Practices

    Rene Kundersma

    Read the article

  • How to mount an external HDD?

    - by Slash
    I have Ubuntu 12.04, the latest version right now. I want to mount an external 1TB NTFS HDD, and I have followed many guides but still no success. The error I'm getting is this:

        Failed to read last sector (1953523119): Invalid argument
        HINTS: Either the volume is a RAID/LDM but it wasn't setup yet,
           or it was not setup correctly (e.g. by not using mdadm --build ...),
           or a wrong device is tried to be mounted,
           or the partition table is corrupt (partition is smaller than NTFS),
           or the NTFS boot sector is corrupt (NTFS size is not valid).
        Failed to mount '/dev/sdb1': Invalid argument
        The device '/dev/sdb1' doesn't seem to have a valid NTFS.
        Maybe the wrong device is used? Or the whole disk instead of a
        partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?

    Using Storage Device Manager I get this error:

        Error mounting: mount exited with exit code 1: helper failed with:
        mount: only root can mount /dev/sdb1 on /media/Skliros_Diskos {external disk name}

    When I use sudo fdisk -l, this is the output:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e0bc6

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048   618854399   309426176   83  Linux
        /dev/sda2       618856446   625141759     3142657    5  Extended
        /dev/sda5       618856448   625141759     3142656   82  Linux swap / Solaris

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002093a

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048  1953525167   976761560    7  HPFS/NTFS/exFAT

    Read the article

  • Spirent Communications Improves Customer Experience with Knowledge Management

    - by Tony Berk
    Spirent Communications plc is a global leader in test and measurement, inspiring innovation within development labs, communication networks and IT organizations. The world’s leading communications companies rely on Spirent to help design, develop, validate, and deliver world-class networks, devices, and services. Spirent’s customers require high levels of support for a diverse and complex product portfolio, and the company is committed to delivering on this requirement. Spirent needed a solution to help its customers get the information they need quickly and at their convenience through its Web site. After evaluating several solutions, Spirent selected and deployed Oracle Knowledge for Web Self Service Enterprise Edition. Oracle Knowledge Management uses natural language processing to understand the true intent of each inquiry logged via the support portal’s search function. The Spirent Knowledge Base on the company’s Customer Support Center (CSC) finds the best possible answer using search enhancement features, such as communications industry-specific libraries and federation to search external sources. Spirent has reduced contact center call volume while better serving its customers. Each time a customer uses the knowledge base, they find answers faster than by calling, and it saves Spirent an average of US$210 per call, which is significant when multiplied across the thousands of calls received monthly. Oracle Knowledge also helps support engineers find answers more quickly, enabling the company to scale without adding additional support engineers. Oracle Knowledge is integrated with Spirent's Siebel Contact Center implementation to provide an integrated desktop for CRM and agent intelligence, avoiding the need for contact center personnel to toggle between various screens to address customer inquiries, thereby accelerating customer service. Click here to learn more about Spirent's use of Siebel CRM and Oracle Knowledge Management.

    Read the article

  • Coarse Collision Detection in highly dynamic environment

    - by Millianz
    I'm currently working on a 3D space game with A LOT of dynamic objects that are all moving (there is pretty much no static environment). I have the collision detection and resolution working just fine, but I am now trying to optimize the collision detection (which is currently O(N^2) -- linear search). I thought about multiple options: a bounding volume hierarchy, a binary space partitioning tree, an octree, or a grid. However, I need some help deciding what's best for my situation. A grid seems unfeasible simply due to the space requirements and cache coherence problems. Since everything is so dynamic, however, it seems that trees aren't ideal either, since they would have to be completely rebuilt every frame. I must admit I have never implemented a physics engine that required spatial partitioning: do I indeed need to rebuild the tree every frame (assuming that everything is constantly moving), or can I update the trees after integrating? Advice is much appreciated - to give some more background: you're flying a space ship in an asteroid field, and there are lots and lots of asteroids and some enemy ships, all of which shoot bullets. EDIT: I came across the "Sweep and Prune" algorithm, which seems like the right thing for my purposes. It appears to be the right mixture of fast building of the data structures involved and detailed enough partitioning. This is the best resource I can find: http://www.codercorner.com/SAP.pdf If anyone has any suggestions on whether or not I'm going in the right direction, please let me know.
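
    For reference, the core of sweep and prune is small: sort object intervals along one axis and keep an "active" set, so only objects whose intervals overlap on that axis ever reach the narrow-phase test; because objects move little between frames, the list stays nearly sorted and is cheap to maintain. A minimal single-axis sketch in Python (the object layout is a hypothetical simplification):

        def sweep_and_prune(objects):
            """objects: list of (id, min_x, max_x). Returns candidate pairs
            whose x-intervals overlap (to be handed to the narrow phase)."""
            # O(n log n) here; a persistent implementation keeps the almost-
            # sorted order from the last frame and fixes it up in near O(n).
            objects = sorted(objects, key=lambda o: o[1])
            pairs = []
            active = []  # intervals whose right end we have not passed yet
            for obj in objects:
                start = obj[1]
                # Drop intervals that end before this one starts.
                active = [a for a in active if a[2] >= start]
                # Everything still active overlaps obj on this axis.
                pairs.extend((a[0], obj[0]) for a in active)
                active.append(obj)
            return pairs

        boxes = [("ship", 0.0, 2.0), ("asteroid1", 1.5, 3.0), ("asteroid2", 5.0, 6.0)]
        print(sweep_and_prune(boxes))  # [('ship', 'asteroid1')]

    A full implementation sweeps all three axes (or intersects per-axis results) and updates endpoint positions in place each frame rather than rebuilding, which is what makes it suit a scene where everything moves.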

    Read the article

  • CPU Wars Is a Trump-Style Card Game Driven by Chip Stats

    - by Jason Fitzpatrick
    If you’re looking for the geekiest card game around, you’d be hard-pressed to beat CPU Wars - a top-trumps card game built around CPU specs. From the game’s designers:

    CPU Wars is a trump card game built by geeks for geeks. For Volume 1.0 we chose 30 CPUs that we believe had the greatest impact on desktop history. The game is ideally played by 2 or 3 people. The deck is split between the players and then each player takes a turn and picks a category that they think has the best value. We have chosen the most important specs that could be numerically represented, such as maximum speed achieved and maximum number of transistors. It’s lots of fun, it has a bit of strategy and can be played during a break or over a coffee.

    If you’re interested, you can pick up a copy for £7.99 (roughly $12.50 USD). Hit up the link below for more information.

    Read the article

  • SharePoint 2010 and Windows Server Backup

    - by Enrique Lima
    A couple of months ago, a friend found a bit of information on TechNet that has proven to be quite useful. See, I am of the opinion that SharePoint allows for smaller deployments to be made, and with that said, I am talking about SharePoint Foundation 2010 being used for the most part. But truly the point here is not to discuss whether a deployment of SharePoint Foundation 2010 or SharePoint Server 2010 is right or not. The fact is they do take place and happen. And information will reside there. Now, the point of this post is to raise awareness of options available for companies that have implemented it and maybe are a bit “iffy” on how to protect the information being placed in libraries and lists. In many cases I have found SharePoint comes first and business continuity becomes an afterthought. The documentation piece from TechNet states:

    “You can register SharePoint Server 2010 with Windows Server Backup by using the stsadm.exe -o registerwsswriter operation to configure the Volume Shadow Copy Service (VSS) writer for SharePoint Server. Windows Server Backup then includes SharePoint Server 2010 in server-wide backups. When you restore from a Windows Server backup, you can select Microsoft SharePoint Foundation (no matter which version of SharePoint 2010 Products is installed), and all components reported by the VSS writer for SharePoint Server 2010 on that server at the time of the backup will be restored. Windows Server Backup is recommended only for use with single-server deployments.”

    Even in the event of single-server deployments, you will have options to safeguard your data. The process requires that, after you have executed the stsadm command above, you then use Windows Server Backup to do a full server backup. When the restore operation is needed, you will be able to select specifically the section that has the SharePoint technologies backup.

    [Screenshot in the original post: the restore process]

    Hope you find this to be a helpful post. I have found this to be especially handy in SharePoint deployments that are part of a Team Foundation Server deployment and that are isolated from any other SharePoint farm and such. Credits: Sean McDonough for passing along the information available on TechNet.

    Read the article

  • Why do my gvfs mounts not show up under ~/.gvfs?

    - by kynan
    From what I read, when mounting a network share via Nautilus or gvfs-mount, the mount point should be in ~/.gvfs. This seems not to be the case for me: I tried mounting both an FTP and an SMB share, via both Nautilus and gvfs-mount, under both Ubuntu Maverick and Natty, and in none of the cases did I see any mount point under ~/.gvfs. I can access the shares just fine in Nautilus, but I want to have access via the command line, which is why I need a mount point in the file system.

    Edit: Debugging, following James Henstridge's answer and enzotib's comment, revealed that on my laptop gvfs-fuse-daemon is running and consequently gvfs mounts show up in ~/.gvfs, whereas on the two workstations where ~/.gvfs remained empty, gvfs-fuse-daemon was not running. On all three machines there are other gvfs processes running: gvfsd, gvfs-afc-volume-monitor, ... On the laptop, mount | fgrep gvfs yields:

        gvfs-fuse-daemon on /home/xxx/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=xxx)

    That raises the questions:
    - How are shares mounted without gvfs-fuse-daemon running? Is there no mount point created in that case, and is every access to the share a gvfs library call?
    - Which daemon is responsible? gvfsd?
    - What's the role of gvfs-fuse-daemon? Does it only create a fuse mount point in ~/.gvfs?

    Read the article

  • The Oldest Big Data Problem: Parsing Human Language

    - by dan.mcclary
    There's a new whitepaper up on Oracle Technology Network which details the use of Digital Reasoning Systems' Synthesys software on Oracle Big Data Appliance.  Digital Reasoning's approach is inherently "big data friendly," as it leverages multiple components of the Hadoop ecosystem.  Moreover, the paper addresses the oldest big data problem of them all: extracting knowledge from human text.   You can find the paper here.   From the Executive Summary: There is a wealth of information to be extracted from natural language, but that extraction is challenging. The volume of human language we generate constitutes a natural Big Data problem, while its complexity and nuance requires a particular expertise to model and mine. In this paper we illustrate the impressive combination of Oracle Big Data Appliance and Digital Reasoning Synthesys software. The combination of Synthesys and Big Data Appliance makes it possible to analyze tens of millions of documents in a matter of hours. Moreover, this powerful combination achieves four times greater throughput than conducting the equivalent analysis on a much larger cloud-deployed Hadoop cluster.

    Read the article

  • Grub loading unknown file system grub rescue

    - by Ashish Kumar
    I am new to Linux. I had Windows 7 installed on my laptop. Yesterday I installed Ubuntu 9.0 from a free CD I got, and chose the option "Install side by side with Windows 7 loader" (something like that). Everything went fine and I installed Ubuntu. When I restarted, I could easily boot into both Windows 7 and Ubuntu. But later, when I had booted into Windows 7, I did some disk management by right-clicking "My Computer" and choosing the option "Manage". Actually, I deleted a drive and added that empty volume to another drive. Now when I start my system it shows the following error:

        Grub loading
        Unknown file system
        grub rescue>

    I have no clue what to do now. Please help me, it's urgent. Any solution in layman's language is greatly appreciated. P.S. I want to keep all my files on both Windows 7 and Ubuntu.

    Read the article

  • Install VirtualBox Guest Additions "Feisty Fawn"

    - by codebone
    I am trying to install the VirtualBox Guest Additions into a Feisty Fawn (7.04) VM. The problem I am running into is that when I click "Install Guest Additions", or place the ISO in the drive manually, the disk never shows up in the machine. I even mounted it manually and went to the mounted directory, and it was empty. When using the file browser, double-clicking on the cdrom yields:

        Unable to mount the selected volume.
        mount: special device /dev/hdc does not exist.

    Secondly, I tried installing via the repository... but according to the system, I have no internet connection. Yet I can browse the web using Firefox just fine (in the guest). Here is my /etc/network/interfaces file:

        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet dhcp

    I also tried setting a static IP:

        auto lo
        iface lo inet loopback
        auto eth0
        iface eth0 inet static
            address 192.168.1.253
            netmask 255.255.255.0
            gateway 192.168.1.1

    The reason I am trying to get this particular installation running is because it is part of a book, "Hacking: The Art of Exploitation" (Jon Erickson), and is fully loaded with tools and such to go along exactly with the book. I appreciate any effort toward finding a solution for this! Thanks

    Read the article
