Search Results

Search found 65558 results on 2623 pages for 'large data'.


  • Good Practices for development team in large projects

    - by Moshe Magnes
    Since I started learning C a few years ago, I have never been part of a team working on a project. I'm very interested to know the best practices for writing large projects in C. One of the things I want to know is when (not how) to split a project into different source files. My previous experience is with writing a header-source pair (the functions declared in the header are defined in the source). I would like to know the best practices for splitting a project, and some pointers on what matters when writing a project as part of a team.
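    A common rule of thumb is to start a new header/source pair once a group of functions and the data they share form a unit that the rest of the program only needs through a handful of declarations. A minimal sketch of such a pair (hypothetical names), with the representation hidden behind an opaque type:

        /* stack.h -- public interface only; callers never see the internals. */
        #ifndef STACK_H
        #define STACK_H

        #include <stddef.h>

        typedef struct Stack Stack;               /* opaque type: layout stays private */

        Stack *stack_create(size_t capacity);
        int    stack_push(Stack *s, int value);   /* 0 on success, -1 if full  */
        int    stack_pop(Stack *s, int *out);     /* 0 on success, -1 if empty */
        void   stack_destroy(Stack *s);

        #endif /* STACK_H */

        /* stack.c -- the only file that knows how a Stack is laid out. */
        #include <stdlib.h>
        #include "stack.h"

        struct Stack { int *items; size_t count, capacity; };

        Stack *stack_create(size_t capacity)
        {
            Stack *s = malloc(sizeof *s);
            if (!s) return NULL;
            s->items = malloc(capacity * sizeof *s->items);
            if (!s->items) { free(s); return NULL; }
            s->count = 0;
            s->capacity = capacity;
            return s;
        }

        int stack_push(Stack *s, int value)
        {
            if (s->count == s->capacity) return -1;
            s->items[s->count++] = value;
            return 0;
        }

        int stack_pop(Stack *s, int *out)
        {
            if (s->count == 0) return -1;
            *out = s->items[--s->count];
            return 0;
        }

        void stack_destroy(Stack *s)
        {
            if (s) { free(s->items); free(s); }
        }

    The rest of the program includes only stack.h, so the internal layout can change without touching any caller - which is usually the moment a module has earned its own source file.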

    Read the article

  • Efficient visualization of a large voxelized volume

    - by Alejandro Piad
    Let's consider a large voxelized volume stored in an octree or any other convenient structure. This volume represents, for instance, a landscape, where each block is either empty (air) or has a specific material that will later be used to apply a texture. Voxels that are next to each other represent connected sections of the surface. What I need is an algorithm to generate a mesh from these voxels that represents the volume, with the following characteristics: all the "holes" in the voxelized volume are correct; all the connections are correct, i.e. seamless; and the surface appears smooth. In a broad sense, I want to somehow preserve the surface topology, meaning that connected sections remain connected in the resulting mesh and that the surface has a curvature that responds to the voxel topology. Imagine trying to render the Minecraft world but getting the mountains to be smooth instead of blocky.
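    A useful baseline before reaching for marching cubes or dual contouring is plain boundary-face extraction: walk the voxels and emit a face wherever a solid voxel touches air. That already guarantees a closed, seam-free surface (though a blocky one); smoothing then amounts to swapping the face-emission step for an isosurface extractor. A minimal sketch on a dense array (hypothetical sizes, no octree traversal):

        /* Emit one quad per solid/empty boundary -- the "culled cubes" baseline.
         * A smooth result would replace emit_quad() with marching cubes or dual
         * contouring, but the neighbour test is the same in both cases. */
        #include <stdio.h>

        #define NX 64
        #define NY 64
        #define NZ 64

        static unsigned char voxel[NX][NY][NZ];    /* 0 = air, otherwise a material id */

        static int solid(int x, int y, int z)
        {
            if (x < 0 || y < 0 || z < 0 || x >= NX || y >= NY || z >= NZ)
                return 0;                          /* outside the volume counts as air */
            return voxel[x][y][z] != 0;
        }

        static void emit_quad(int x, int y, int z, int dx, int dy, int dz)
        {
            /* Placeholder: a real mesher would append 4 vertices plus the material. */
            printf("face at (%d,%d,%d), normal (%d,%d,%d)\n", x, y, z, dx, dy, dz);
        }

        int main(void)
        {
            static const int n[6][3] = { {1,0,0}, {-1,0,0}, {0,1,0},
                                         {0,-1,0}, {0,0,1}, {0,0,-1} };
            voxel[32][32][32] = 1;                 /* a single test voxel */

            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    for (int z = 0; z < NZ; z++) {
                        if (!solid(x, y, z)) continue;
                        for (int f = 0; f < 6; f++)
                            if (!solid(x + n[f][0], y + n[f][1], z + n[f][2]))
                                emit_quad(x, y, z, n[f][0], n[f][1], n[f][2]);
                    }
            return 0;
        }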

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals. Key findings include: Corporate budgets increase, but trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year's spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise. Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise. Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools. Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise. IOUG recommendations: The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following. Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls. Get business buy-in and support. Data security only works if it is backed by executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases. Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality. Read the IOUG Data Security Survey Now.

    Read the article

  • Downloading large files hangs system

    - by Igor
    When I'm trying to download large files, i.e. 1 GB or more, under Firefox, it starts with a very high download speed and within a few seconds almost reaches the maximum (~11 MB/s). It downloads very fast, but when the downloaded size gets to around 700-800 MB or more, my system almost completely hangs, so I can do nothing - I just have to wait until it finishes downloading. Also, when it hangs I can't see the download progress - it looks like it has completely hung. Sometimes, however, if the file size is near 1 GB, the system comes back from the hang and finishes the download, but sometimes I just can't wait for the system to come back and have to kill Firefox from top (it takes me 2 minutes to do this because of the very slow system performance). I use Firefox as my primary browser. If I use wget with a direct link to the file, everything is fine: speed at max, no performance decrease. So what can I do?

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending just to reinstall to the base partition. After several seconds, I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding 100 unique partitions when I run it - they mostly tend to be HFS+ or Solaris /home (which I think is just ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then do data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan
    Mon Dec 10 06:03:00 2012 Command line: TestDisk TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64 Compiler: GCC 4.4 Compilation date: 2012-11-27T22:44:52 ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none /dev/sda: LBA, HPA, LBA48, DCO support /dev/sda: size 3907029168 sectors /dev/sda: user_max 3907029168 sectors /dev/sda: native_max 3907029168 sectors Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512 /dev/sr0 is not an ATA disk Hard disk list Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80 Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07 Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12 Partition table type (auto): Intel Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0 Partition table type: EFI GPT Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 Current partition structure: Bad GPT partition, invalid signature. 
search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB Results P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB interface_write() 1 P MS Data 2048 3900753919 3900751872 2 P Linux Swap 3900755968 3907028975 6273008 search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 477915164 recover_EXT2: part_size 3823321312 MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB block_group_nr 1 ....snip...... 
MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 
GiB Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB MS Data 174395467 178483851 4088385 ..... snip (it keeps going on for quite a while)

    Read the article

  • Video Presentation and Demo of Oracle Advanced Analytics & Data Mining

    - by Mike.Hallett(at)Oracle-BI&EPM
    For a video presentation and demonstration of Oracle Advanced Analytics & Data Mining, click here. (This plays a large MP4 file in a browser; access is from Google Docs, and it works best with Google Chrome.) This one-hour session focuses primarily on the Oracle Data Mining component of the Oracle Advanced Analytics Option, along with Oracle R Enterprise, and is tied to the Oracle SQL Developer Days virtual and onsite events. It is presented by Oracle's Director for Advanced Analytics, Charlie Berger, covering: Big Data and Big Data Analytics; competing on analytics and the value proposition; what data mining is and typical use cases; Oracle Data Mining's high-performance in-database SQL-based data mining functions; Exadata "smart scan" scoring; the Oracle Data Miner GUI (an extension that ships with SQL Developer); Oracle Business Intelligence EE with Oracle Data Mining results/predictions in dashboards; applications "powered by Oracle Data Mining" for factory-installed predictive analytics methodologies; and Oracle R Enterprise. Please contact [email protected] should you have any questions. 

    Read the article

  • New Feature in ODI 11.1.1.6: Enterprise Data Quality Integration

    - by Julien Testut
    Oracle Data Integrator 11.1.1.6.0 introduces a new Open Tool called EnterpriseDataQuality, which allows ODI users to invoke an Oracle Enterprise Data Quality Job from a Package. This post will give you an overview of this new feature. Oracle Enterprise Data Quality (OEDQ) provides organizations with an integrated suite of data quality tools that offer an end-to-end solution to measure, improve, and manage the quality of data from any domain, including customer and product data. The addition of the EnterpriseDataQuality Open Tool extends the inline data quality capabilities of Oracle Data Integrator with Oracle Enterprise Data Quality's powerful data profiling, cleansing, matching, and monitoring capabilities. The EnterpriseDataQuality Open Tool can invoke any OEDQ Job stored in a Project. This Open Tool connects to an OEDQ server using a JMX (Java Management Extensions) interface. Once installed, this Open Tool will be found under Plugins in the Package Toolbox area. The EnterpriseDataQuality Open Tool takes a couple of parameters as inputs, such as the Enterprise Data Quality Job and Project names, the OEDQ hostname and JMX port, etc. With the EnterpriseDataQuality Open Tool, ODI customers can now incorporate their Oracle Enterprise Data Quality processes within their data integration workflows. You will find instructions on how to use the Enterprise Data Quality Open Tool in the Oracle Data Integrator documentation at: Using the EnterpriseDataQuality Open Tool. You can find an overview of all the new features introduced in ODI 11.1.1.6 in the following document: ODI 11.1.1.6 New Features Overview.

    Read the article

  • Web services, Java EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (data mining and machine learning) and have good exposure to core Java (3 years). I have read up on a bunch of stuff on the following topics: design patterns; Java EE; web services (SOAP and REST); Spring and Hibernate; and Java concurrency - advanced features like tasks and executors. I would now like to do a project combining this stuff -- over my free time, of course -- to get a better understanding of these things and to build an end-to-end piece of software (to learn the best design principles, etc., plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus, as it fits with my research, but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • How to recover data from a drive that was erased during the Ubuntu installation?

    - by user110353
    Yesterday I had two NTFS partitions on my drive: one on which Windows 7 was installed (the C drive) and another that contained my data (the D drive). During the Ubuntu installation I chose the option to install Ubuntu and erase my existing OS. When Ubuntu was installed, I was shocked to see no partitions. All my data was gone. I must have done something wrong when selecting my options during installation. Is there any way I can recover my D drive? Thanks in advance.

    Read the article

  • Strategy for building an application to replace a large spreadsheet

    - by Dan Walmsley
    I'm working on an application that is going to replace a rather large spreadsheet. The spreadsheet is used to budget purchases and things like that. It is the largest spreadsheet I have ever seen, and it required a lot of manual data entry, so this application is going to automate much of that. But as I'm working on this I've noticed it's slow going. And I got to thinking this must be a common situation: many companies will start with something like a spreadsheet, and when it gets too big to maintain, they will have a custom application built. So is there anything out there (a framework or similar) that does this sort of thing, migrating a spreadsheet to a custom application? I've had a quick Google but haven't really seen the kind of thing I'm looking for. It's too late for this project, but I thought it would be worth having a look for next time. How do you guys tackle this problem?

    Read the article

  • Apply WCF For Large Projects

    - by svlytns
    We have a large project that has nearly 20 modules in it. We want to use WCF for the business layer, and we see three ways to implement it. First, use only one data contract and one operation contract: send the class name and method name to the operation, create the class by reflection, and then invoke the method on the WCF side. Second, put all modules in one WCF application and create their data contracts and operation contracts. Third, create a separate WCF application for each module and host them separately. Which is the best way? I need your ideas. TIA!

    Read the article

  • Move Data into the grid for scalable, predictable response times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today's web-scale loads because it eliminates database latency by storing important and frequently accessed data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through the Java Persistence API (JPA via TopLink Grid) and now through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in its limited beta release, the LLAPI gives developers the ability to use the standard put/remove logic available in Coherence and then wrap that logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of the LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these "multi-client" transactions at speed with no loss of ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today's web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.ph, or send your questions to [email protected]. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Best way to do large XNA animations?

    - by Harold
    What's the best way to handle large animations in XNA 4.0? I have created a sprite sheet with the sprite being 250x400 (more of an image than a sprite, but hey ho) and there are approximately 45 frames in the animation. This causes problems for XNA, as it says that the maximum texture size for the Reach profile is 2048. I'd rather not change to HiDef, as I heard that means your game is less compatible with some computers and systems, so does anyone have any idea what the best thing to do is? The only thing I could come up with is to have a list of textures to flick through, but that's not ideal.

    Read the article

  • Drawing large 2D sidescroller level terrain

    - by Yar
    I'm a relatively good programmer, but now that it comes to adding some basic levels to my 2D game I'm kind of stuck. What I want to do: an acceptable, large (8000 x 1000 pixels) "green hills" test level for my game. What is the best way for me to do this? It doesn't have to look great, it just shouldn't look like it was made in MS Paint with the line and paint bucket tools. Basically it should just be mud with grass on top of it, shaped into some form of hills. But how should I draw it? I can't just take out the pencil tool and start drawing it pixel by pixel, can I?
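    One approach that avoids pixel-by-pixel painting is to generate the hill profile procedurally and only paint at render time: fill everything below the profile with mud and draw a grass band along the top. A minimal sketch of midpoint displacement for the profile, assuming the level is just one ground height per column of the 8000-pixel strip:

        /* Midpoint-displacement heightline for a 2D side-scroller: the array ends
         * up holding one ground height per column of the level. */
        #include <stdio.h>
        #include <stdlib.h>

        #define WIDTH 8193                  /* 2^13 + 1 columns */

        static float height[WIDTH];

        int main(void)
        {
            float roughness = 300.0f;       /* initial vertical jitter, in pixels */
            srand(1234);                    /* fixed seed: same hills every run   */

            height[0] = 500.0f;             /* endpoints of the hill profile */
            height[WIDTH - 1] = 500.0f;

            for (int step = WIDTH - 1; step > 1; step /= 2) {
                for (int i = 0; i + step < WIDTH; i += step) {
                    float mid    = (height[i] + height[i + step]) / 2.0f;
                    float jitter = ((float)rand() / RAND_MAX - 0.5f) * roughness;
                    height[i + step / 2] = mid + jitter;
                }
                roughness *= 0.55f;         /* smaller bumps at finer scales */
            }

            for (int i = 0; i < WIDTH; i += 1024)   /* peek at a few columns */
                printf("x=%d  ground_y=%.1f\n", i, height[i]);
            return 0;
        }

    Tweaking the initial roughness and the 0.55 decay factor changes how rolling or jagged the hills look.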

    Read the article

  • Fusion CRM Data Integration and Migration from Conemis (D)

    - by Richard Lefebvre
    Conemis Data Integration Tools for Oracle Fusion CRM offers easy-to-use, pre-configured tools for data integration, data quality, and migration of data from Oracle CRM On Demand and third-party applications to Oracle Fusion CRM. The Conemis solution includes: pressure fueling of data for Fusion CRM; migration from legacy systems to Fusion CRM; data quality in migration and integration; intuitive data housekeeping for IT and sales; and backups of Fusion CRM environments. The solution's benefits include Fusion CRM integration out of the box, connection to other applications, ready-made data mapping, instant availability without installation, full configurability, shared use in integration expert groups, one GUI for several environments/pods, reduced costs and risks in migration projects, and more. Conemis AG, a German-based data integration company founded in 2009, offers software, services, and expertise for data migration and integration with Oracle CRM products. For more details, please contact Dr. Daniel Rolli ([email protected]) or visit www.conemis.com.

    Read the article

  • Moving large website to new CMS - URL changes

    - by herrherr
    Hi, I was wondering if you have any tips on the following situation. I'm going to move a large website to a new content management system. Here are some details on the site: an online news magazine with roughly 3,000 articles; domain age: 10 years; online in its current form since May 2010; indexed pages: ~10,000; share of search engine traffic: under 10%. Unfortunately a custom-tailored CMS was used for the site. The performance, reliability, and SEO capabilities have been really bad, so we are moving to a new and proven open source CMS. All the articles will be kept as they are, but the URL structure as well as the structure of the HTML templates will be changed. What I want to do now is create 301 redirects for all articles from the old to the new scheme, i.e.: Old: www.example.com/en/html/news/detail/title-of-the-article/ New: www.example.com/category/title-of-article.html Is this a proven way to do something like this? If not, can you recommend a way that has worked for you? Thanks :)

    Read the article

  • web services, J2EE, spring, DB integration project ideas- maybe data mining related?

    - by sj88
    Hey guys, I am a graduate CS student (data mining and machine learning) and have good exposure to core Java (3 years). I have read up on a bunch of stuff on design patterns, J2EE, web services (SOAP and REST), Spring and Hibernate, and Java concurrency - advanced features like tasks and executors. I would now like to do a project combining this stuff (over my free time, of course) to get a better understanding of these things and to build an end-to-end piece of software (to learn the best design principles, etc., plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus (it fits with my research) but is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • Handling cameras in a large scale game engine

    - by Hannesh
    What is the correct, or most elegant, way to manage cameras in large game engines? Or should I ask, how does everybody else do it? The methods I can think of are: Binding cameras straight to the engine; if someone needs to render something, they bind their own camera to the graphics engine which is in use until another camera is bound. A camera stack; a small task can push its own camera onto the stack, and pop it off at the end to return to the "main" camera. Attaching a camera to a shader; Every shader has exactly one camera bound to it, and when the shader is used, that camera is set by the engine when the shader is in use. This allows me to implement a bunch of optimizations on the engine side. Are there other ways to do it?
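    For the camera stack option, a minimal sketch of the push/pop discipline (hypothetical engine types), with the renderer always reading whatever is on top:

        /* A tiny camera stack: a subsystem pushes its own camera for the duration
         * of its rendering work and pops it afterwards, so the "main" camera is
         * simply whatever was pushed first. */
        #include <assert.h>
        #include <stdio.h>

        typedef struct {
            float view[16];                 /* view matrix (left zeroed for brevity) */
            float proj[16];                 /* projection matrix                     */
        } Camera;

        #define MAX_CAMERAS 16

        static Camera *stack[MAX_CAMERAS];
        static int     depth = 0;

        void camera_push(Camera *cam)
        {
            assert(depth < MAX_CAMERAS);
            stack[depth++] = cam;
        }

        void camera_pop(void)
        {
            assert(depth > 0);
            depth--;
        }

        Camera *camera_current(void)        /* the renderer queries this each draw */
        {
            assert(depth > 0);
            return stack[depth - 1];
        }

        int main(void)
        {
            Camera main_cam = {0}, shadow_cam = {0};

            camera_push(&main_cam);         /* scene rendering                     */
            camera_push(&shadow_cam);       /* a shadow-map pass borrows the state */
            printf("drawing with %p\n", (void *)camera_current());
            camera_pop();                   /* back to the main camera             */
            printf("drawing with %p\n", (void *)camera_current());
            camera_pop();
            return 0;
        }

    The same idea scales up if each render pass, editor viewport, or cutscene owns a camera and the stack only tracks which one is currently active.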

    Read the article

  • How can I store all my level data in a single file instead of spread out over many files?

    - by Jon
    I am currently generating my level data and saving it to disk to ensure that any modifications done to the level are saved. I am storing "chunks" of 2048x2048 pixels in a file: whenever the player moves over a section that doesn't have a file associated with its position, a new file is created. This works great and is very fast. My issue is that as you play, the file count gets larger and larger. I'm wondering what techniques can be used to reduce the file count without taking a performance hit. I am interested in how you would store, seek, and update this data efficiently in a single file instead of multiple files.
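    One common single-file layout, sketched with hypothetical sizes: reserve a fixed-size table of chunk offsets at the front of the file, append each 2048x2048 chunk the first time it is generated, and fseek straight to its offset when it is needed again. Because chunks are a fixed size they can be rewritten in place. (For brevity this uses fseek/ftell; a real store would need 64-bit fseeko/ftello or platform equivalents once the file passes 2 GB.)

        /* Single-file chunk store: [offset table][chunk data ...].
         * An offset of 0 means "chunk not generated yet". */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define WORLD_W     64                      /* chunks across the world (assumed) */
        #define WORLD_H     64
        #define CHUNK_BYTES (2048 * 2048 * 4)       /* one 2048x2048 RGBA chunk */

        static int64_t index_pos(int cx, int cy)    /* byte position of the table entry */
        {
            return (int64_t)(cy * WORLD_W + cx) * (int64_t)sizeof(int64_t);
        }

        int save_chunk(FILE *f, int cx, int cy, const unsigned char *data)
        {
            int64_t offset = 0;
            fseek(f, (long)index_pos(cx, cy), SEEK_SET);
            if (fread(&offset, sizeof offset, 1, f) != 1) return -1;

            if (offset == 0) {                      /* first save: append at end of file */
                fseek(f, 0, SEEK_END);
                offset = ftell(f);
                fseek(f, (long)index_pos(cx, cy), SEEK_SET);
                fwrite(&offset, sizeof offset, 1, f);
            }
            fseek(f, (long)offset, SEEK_SET);
            fwrite(data, 1, CHUNK_BYTES, f);
            return 0;
        }

        int load_chunk(FILE *f, int cx, int cy, unsigned char *data)
        {
            int64_t offset = 0;
            fseek(f, (long)index_pos(cx, cy), SEEK_SET);
            if (fread(&offset, sizeof offset, 1, f) != 1 || offset == 0) return -1;
            fseek(f, (long)offset, SEEK_SET);
            return fread(data, 1, CHUNK_BYTES, f) == CHUNK_BYTES ? 0 : -1;
        }

        int main(void)
        {
            FILE *f = fopen("world.dat", "w+b");    /* first run: create an empty table */
            if (!f) return 1;
            static const int64_t zero = 0;
            for (int i = 0; i < WORLD_W * WORLD_H; i++)
                fwrite(&zero, sizeof zero, 1, f);

            static unsigned char chunk[CHUNK_BYTES];
            memset(chunk, 0x7f, sizeof chunk);
            save_chunk(f, 3, 5, chunk);             /* write one chunk, read it back */
            load_chunk(f, 3, 5, chunk);
            fclose(f);
            return 0;
        }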

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a "test" Scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.
    Don't let the Project Manager be the Product Owner
    The first lesson learnt is to identify the correct product owner - in this instance the project manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner - in fact, they have the most to gain should the project fail.
    Know the time commitments of team members to the project
    The second lesson learnt is to get a firm time commitment from the members of the team for the sprint and to hold them to it. In this project many of the issues we faced were with team members having to double up on supporting existing projects/systems and the Scrum project. In many situations they just didn't get around to doing any work on the Scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team - in stand-up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse. Putting up a time burn-down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan a sprint without knowing the actual time available from the members? By actual time, I mean the exercise of getting them to go through all their appointments, lunch times, and breaks and removing those from their time commitment, which helps you get to a realistic amount of time they can dedicate.
    Make sure you meet your minimum team sizes
    In a recent post I wrote about the difference between a partnership and a team. If you are going to do Scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people tend to be sick more, take more leave, and generally not be around - if you have a team size of two it is so easy to lose momentum on the project. The more people you have on the team (up to about 9), the more momentum the project will have when people are not around.
    Swapping from one methodology to another can seem like waste to the customer
    It sounds bad, but most customers don't care what methodology you use. Often they have bought into the "big plan upfront". If you can, avoid taking on a project midstream from a traditional approach unless the customer has not bought into the process. With this particular project there was a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a Scrum implementation - this seemed like waste to the customer. We should have managed the customer's expectations properly.
    Don't play the role of the scrum master if you can't be the scrum master
    With this particular implementation I was the "scrum master". But all I did was go through the formal meetings of Scrum - I attended stand-up, retrospectives, and planning - I was not hands-on on the ground. I was not performing the most important role of removing blockages, and by the end of the project a number of blockages had cropped up. 
    What could have been a better approach was to take someone on the team, train them to be the scrum master, and be present to coach them. Alternatively, actually be on the team on a full-time basis and be the scrum master. Just going through the meetings of Scrum didn't mean we were doing Scrum.
    So we failed with this one; if you fail, look at it from an agile perspective
    As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Various people on the team expressed emotions that were not encouraging and reinforced the sense of failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focusing on learning from why, and on how to avoid such a failure in the future, can change how you feel emotionally about the team, the project, and the organization.

    Read the article

  • What is the most complicated data structure you have used in a practical situation?

    - by Fanatic23
    The germ of this question came from a discussion I was having with a couple of fellow developers from industry. It turns out that in a lot of places project managers are wary of complex data structures and generally insist on whatever exists out of the box in the standard library/packages. The general idea seems to be to use a combination of what's already available unless performance is seriously impeded. This helps keep the code base simple, which to the non-diplomatic would mean "we have high attrition, and the newer people we hire may not be that good". So no Bloom filters, skip lists, or splay trees for you CS junkies. So here's the question (again): what's the most complicated data structure you built or used at work? It helps to get a sense of how good/sophisticated real-world software is.
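    As a concrete example of the kind of structure such managers tend to veto, here is a minimal Bloom filter sketch - about thirty lines that trade a small false-positive rate for a large memory saving over a full hash set:

        /* Minimal Bloom filter: K salted hash probes into a bit array; lookups can
         * return false positives but never false negatives. */
        #include <stdint.h>
        #include <stdio.h>

        #define NBITS (1u << 20)                /* 1 Mbit filter */
        #define K     4                         /* hash probes per key */

        static uint8_t bitmap[NBITS / 8];

        static uint32_t hash(const char *s, uint32_t seed)
        {
            uint32_t h = 2166136261u ^ seed;    /* FNV-1a, salted per probe */
            while (*s) {
                h ^= (uint8_t)*s++;
                h *= 16777619u;
            }
            return h % NBITS;
        }

        static void add(const char *s)
        {
            for (uint32_t i = 0; i < K; i++) {
                uint32_t b = hash(s, i);
                bitmap[b / 8] |= (uint8_t)(1u << (b % 8));
            }
        }

        static int maybe_contains(const char *s)
        {
            for (uint32_t i = 0; i < K; i++) {
                uint32_t b = hash(s, i);
                if (!(bitmap[b / 8] & (1u << (b % 8))))
                    return 0;                   /* definitely not present */
            }
            return 1;                           /* probably present */
        }

        int main(void)
        {
            add("alice@example.com");
            printf("%d %d\n", maybe_contains("alice@example.com"),
                              maybe_contains("bob@example.com"));
            return 0;
        }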

    Read the article

  • How can I protect my save data from casual hacking?

    - by Danran
    What options are there for saving game data in a secure manner? I'm interested in solutions specifically tailored for C++. I'm looking for something that is fast and easy to use. I'm only concerned with storing simple information such as which levels are and are not unlocked, and the user's score for each level. I'm curious to know what's out there to use - any good libraries that give me nice, secure game data files that the average player can't mess with? I just found this here, which looks very nice, but it would be great to get some opinions on other potential libraries/options out there.
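    Nothing stored on the player's own machine can be made truly secure, but a keyed checksum is usually enough to stop casual editing with a hex editor. A minimal C sketch (the same idea ports directly to C++): FNV-1a over a secret salt plus the payload, written ahead of the payload. This is not real cryptography; a production build might use an HMAC from an actual crypto library instead, and the salt string here is just a placeholder.

        /* Tamper check for a save file: store hash(salt + payload) next to the
         * payload. This only deters casual editing -- the salt ships in the binary. */
        #include <stdint.h>
        #include <stdio.h>

        static const char SALT[] = "replace-with-your-own-secret";   /* placeholder */

        static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
        {
            const unsigned char *p = data;
            while (len--) {
                h ^= *p++;
                h *= 1099511628211ULL;              /* 64-bit FNV prime */
            }
            return h;
        }

        static uint64_t save_hash(const void *payload, size_t len)
        {
            uint64_t h = 14695981039346656037ULL;   /* FNV offset basis */
            h = fnv1a(SALT, sizeof SALT - 1, h);
            return fnv1a(payload, len, h);
        }

        int write_save(const char *path, const void *payload, size_t len)
        {
            FILE *f = fopen(path, "wb");
            if (!f) return -1;
            uint64_t h = save_hash(payload, len);
            fwrite(&h, sizeof h, 1, f);             /* header: 8-byte checksum */
            fwrite(payload, 1, len, f);
            fclose(f);
            return 0;
        }

        int read_save(const char *path, void *payload, size_t len)
        {
            FILE *f = fopen(path, "rb");
            if (!f) return -1;
            uint64_t stored = 0;
            int ok = fread(&stored, sizeof stored, 1, f) == 1 &&
                     fread(payload, 1, len, f) == len &&
                     stored == save_hash(payload, len);
            fclose(f);
            return ok ? 0 : -1;                     /* -1: missing or tampered with */
        }

        int main(void)
        {
            struct { uint32_t unlocked; uint32_t score[8]; } data = { 3, {120, 90, 75} };
            write_save("game.sav", &data, sizeof data);
            printf("verify: %d\n", read_save("game.sav", &data, sizeof data));
            return 0;
        }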

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster size threshold for this, rather a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters) as well as the fact that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query to a single server (using partition affinity to ensure that all data is in a single partition). If this can not be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is also properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).

    Read the article

  • How to speed up rsync/tar of large Maildir

    - by psusi
    I have a very large Maildir that I am copying to a new machine (over 100BaseT) with rsync. The progress is slow. VERY SLOW. Like 1 MB/s slow. I think this is because it is a lot of small files being read in an order that is essentially random with respect to where the blocks are stored on disk, causing a massive seek storm. I get similar results when trying to tar the directory. Is there a way to get rsync/tar to read in disk block order, or otherwise overcome this problem?
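    One trick that sometimes helps with this kind of seek storm: build the file list sorted by inode number, which on many filesystems correlates with on-disk placement, and feed it to tar -T or rsync --files-from instead of letting them walk the directory in readdir order. A minimal sketch for a single Maildir subdirectory (cur/), assuming a Linux/POSIX system:

        /* Print one directory's files sorted by inode, for use with
         * "tar -cf - -T list" or "rsync --files-from=list".  Inode order often
         * approximates disk order well enough to turn random seeks into streaming. */
        #include <dirent.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>

        typedef struct { ino_t ino; char *name; } Entry;

        static int by_inode(const void *a, const void *b)
        {
            ino_t ia = ((const Entry *)a)->ino, ib = ((const Entry *)b)->ino;
            return (ia > ib) - (ia < ib);
        }

        int main(int argc, char **argv)
        {
            const char *dirpath = argc > 1 ? argv[1] : "cur";
            DIR *d = opendir(dirpath);
            if (!d) { perror(dirpath); return 1; }

            Entry *list = NULL;
            size_t n = 0, cap = 0;
            struct dirent *de;

            while ((de = readdir(d)) != NULL) {
                if (de->d_name[0] == '.') continue;      /* skip ".", ".." and dotfiles */
                if (n == cap) {
                    cap = cap ? cap * 2 : 1024;
                    list = realloc(list, cap * sizeof *list);
                    if (!list) { perror("realloc"); return 1; }
                }
                list[n].ino  = de->d_ino;                /* inode straight from readdir */
                list[n].name = strdup(de->d_name);
                n++;
            }
            closedir(d);

            qsort(list, n, sizeof *list, by_inode);
            for (size_t i = 0; i < n; i++)
                printf("%s/%s\n", dirpath, list[i].name);
            return 0;
        }

    Whether it helps depends on the filesystem; the output can be piped into something like tar -cf - -T - | ssh newhost 'tar -xf - -C Maildir' on the sending side.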

    Read the article
