Search Results

Search found 31421 results on 1257 pages for 'software performance'.

Page 435/1257

  • Yes to NoSQL

    There seems to be some backlash building up against NoSQL, with posts like Ted Dziuba's I Can't Wait for NoSQL to Die or Dennis Forbes's The Impact of SSDs on Database Performance and the Performance Paradox of Data Explodification (aka Fighting the NoSQL Mindset). These are interesting articles to read, and yes, RDBMSs are not going the way of the dodo yet (I even said that in The RDBMS is Dead, which, by the way, was written before the term NoSQL was coined, but I digress). Nevertheless,...

    Read the article

  • Make a socket as one user but make it readable and writable by another

    - by user1598585
    I have software that runs as user A. This software creates a socket in /sockets, and the socket should be readable and writable by user B. I have tried setting the directory ownership to A:A or A:B, but when user A creates the socket, it ends up with uid A and gid A. Using ACLs has not helped so far; the default mask keeps the rights from being effective, so rw permissions for B always turn into just r. If what I create is not a socket, it works fine. How can I best accomplish this? (It is for a web server, where the web application creates the socket and the web-server software forwards requests to it.)
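
    A minimal sketch of one common workaround, assuming a Linux host and a shared group (the group name sharedgrp, the socket path, and the variable names are illustrative, not taken from the question): bind the UNIX socket as user A, then chgrp/chmod the socket file so members of the shared group, including user B, get rw access.

        import os
        import shutil
        import socket

        SOCKET_PATH = "/sockets/app.sock"   # hypothetical path
        SHARED_GROUP = "sharedgrp"          # hypothetical group that user B belongs to

        # Remove any stale socket file left over from a previous run.
        if os.path.exists(SOCKET_PATH):
            os.unlink(SOCKET_PATH)

        # Create and bind the UNIX-domain socket as user A.
        srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        srv.bind(SOCKET_PATH)

        # After bind() the socket file exists on disk; hand group access to B.
        shutil.chown(SOCKET_PATH, group=SHARED_GROUP)   # user A must be a member of this group
        os.chmod(SOCKET_PATH, 0o660)                    # rw for owner (A) and group (B)

        srv.listen(5)

    Whether the kernel enforces the socket file's permissions at connect time is platform-dependent (Linux does check them), so treat this as a sketch to adapt rather than a guaranteed fix.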

    Read the article

  • Should I be looking for developers with specific skill sets or generalists who need to learn?

    - by Lostsoul
    Thanks to the great help of this site and SO, I've been able to make a prototype of the software I want to sell, but unfortunately, although the prototype works, I think my code quality is very low. I didn't use much OOP or many design patterns, so although my code is understandable to me, I think a normal developer would faint if they had to read it. So I want to hire a developer to raise the quality a bit and improve some of my API implementations that I may not have done correctly. I'm having trouble hiring a developer, though. I have met 2 developers and had them read my software specs. The problem is, they lacked my business's domain knowledge (which is completely understandable and no biggie), but they also lacked knowledge of the underlying technologies I used, such as Hadoop, HBase, CUDA, etc. I spent a lot of time explaining map/reduce, Bigtable-style stores, and the other technologies I used. I thought it was common knowledge because of my interactions with people on this site, but the people I met with mentioned they had never had to deal with these things, so they didn't know them. My question is: for software projects that hire contract developers, is it a risk if the developer does not have experience with the underlying technologies, or can a good developer who is accomplished in another area realistically pick up new technologies? I did a very quick back-of-the-envelope calculation, and I think the upfront costs would be similar whether I hire a student or a developer with no experience in my technologies who will work many hours, or a highly experienced developer who charges double but finishes in half the time; but what other risks should I be considering or worried about? Also, if I do hire a generalist, should I be paying for the time it takes them to learn Hadoop or CUDA if they are contractors (it seems to make business sense, but I'm not sure how fair it is to them if they never use the skill again)? I'm a bit confused, so any suggestions would be great.

    Read the article

  • Faking display-tree (Sprite) parent-child relationships with rasters (BitmapData) in ActionScript 3

    - by Arthur Wulf White
    I am working with rasters (BitmapData) and blitting (copyPixels) in a 2D game in ActionScript 3. I like how you can move a Sprite and it moves all its children, and you can simultaneously move the children, creating an interesting visual effect. However, I do not want to use rotation or scaling, because I do not know how to do that without hurting performance. So I am not fully simulating Sprite parent-child behavior; I am sticking to movement on the (x, y) axes. What I am planning to do is create a class called RasterContainer, which extends BitmapData and has a vector of children of type Raster (also extending BitmapData). I am planning to implement recursive rendering in RasterContainer that basically calls copyPixels for every child, only changing their (x, y) offsets to reflect their parent's offset. My question is: has this been implemented in an existing framework? Is this a good plan? Should I expect a serious performance hit from using recursive methods this way?
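
    A language-agnostic sketch of the recursive offset idea, written here in Python rather than ActionScript, with a copy_pixels stand-in for BitmapData.copyPixels; all class and function names are illustrative, not part of the question:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Raster:
            """Leaf bitmap placeholder standing in for BitmapData."""
            width: int
            height: int
            x: int = 0   # offset relative to the parent container
            y: int = 0

        @dataclass
        class RasterContainer(Raster):
            children: List[Raster] = field(default_factory=list)

        def copy_pixels(target, source, dest_x, dest_y):
            # Stand-in for BitmapData.copyPixels(source, sourceRect, destPoint).
            print(f"blit {source.width}x{source.height} at ({dest_x}, {dest_y})")

        def render(target, node, parent_x=0, parent_y=0):
            """Recursively blit a node and its children, composing (x, y) offsets."""
            abs_x = parent_x + node.x
            abs_y = parent_y + node.y
            copy_pixels(target, node, abs_x, abs_y)
            for child in getattr(node, "children", []):
                render(target, child, abs_x, abs_y)   # children inherit the parent's offset

        # Moving root.x / root.y now moves every descendant, mimicking Sprite nesting.
        root = RasterContainer(64, 64, x=10, y=10,
                               children=[Raster(16, 16, x=4, y=4), Raster(16, 16, x=20, y=4)])
        render(None, root)

    In this sketch the recursion only walks the container tree once per frame, so the traversal overhead is normally small compared with the per-pixel cost of the copies themselves.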

    Read the article

  • Samsung Series 5 A6 ultrabook

    - by Én Cube
    I'm a fan of Ubuntu, and I want my installation to be perfect for my new laptop, a Samsung Series 5 with an A6 processor. Can somebody suggest what to do, or mention any known problems with this model? I'm mostly concerned about drivers to get the best performance, because I'm going to develop an Android app, and I want this unit to run as fast as possible without any dangerous trade-offs. I have also read forum posts saying that these units overheat. Somebody help me, please. I also want to switch to GNOME 3. And one last thing: is there any difference in performance between choosing 'install alongside', 'replace', or custom partitioning during installation?

    Read the article

  • Eliminating Windows 7 user tracking registry writes

    - by caffiend
    Windows 7 continues the practice of saving user actions in the registry. I'd like to disable this practice, both to avoid registry-file fragmentation and SSD wear and because I'm uncomfortable with programs being able to quickly analyze my usage habits. Even with the "Turn off user tracking" policy enabled, there are at least two areas that still contain user tracking:
    HKCU\Software\Classes\Local Settings\MuiCache - this key stores a cache of most-recently accessed strings, including descriptions of most-recently run exes.
    HKCU\Software\Classes\Local Settings\Software\Microsoft\Windows\Shell\BagMRU - this key stores the most recently viewed folders along with timestamps.
    Are there additional policy settings or registry entries to disable these writes? If not, is it possible to make these entries volatile? Would it be practical to create a temporary hive (e.g., on a RAM disk) and map it over this location?
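
    A minimal sketch (Python, Windows only) for checking what actually accumulates under one of these keys, so the effect of a policy change can be verified; the key path is the one quoted above and may need adjusting for your build:

        import winreg

        KEY_PATH = r"Software\Classes\Local Settings\MuiCache"

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
            value_count = winreg.QueryInfoKey(key)[1]   # number of values under the key
            for i in range(value_count):
                name, value, _type = winreg.EnumValue(key, i)
                print(name, "=>", value)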

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse and to prevent future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required, along with their ranges. In P6 Reporting Database 3.0, any adjustments outside of the defaults must be made in the scripts, and changes will require new ETL runs for each data source. Considerations:
    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.
    2. Number of data source IDs for partitioning during configuration. This setting specifies how many partitions will be allocated for tables containing data source information. It requires some evaluation prior to installation, as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, but then a 3rd data source came along. The necessary steps to accommodate this change are as follows:
    a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for "partition". You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on; these appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and on how to adjust partition ranges.)
    b) Run starETL -r. This recreates each table with the new partition key. The effect of this step is that all table data will be lost except for history-related tables.
    c) Run starETL for each of the 3 data sources (with the data source number, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide).
    The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with the possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.
    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time-related data, and they are dataset specific for optimization. The number of months per partition is self-explanatory: optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions, which determines how many "buckets" will be created for the chosen number of months.
    For example, if you kept the default of 3 partitions and selected 2 months per partition, you would end up with: 1st partition, 2 months; 2nd partition, 2 months; 3rd partition, all the remaining records. Therefore, with regard to this setting, it is important to analyze your source database's spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware that the DBA will need to monitor when these partition ranges fill up and when additional partitions need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
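
    A toy illustration of the bucket arithmetic above, in plain Python (the numbers are the defaults from the example, not anything mandated by the product): with 3 partitions at 2 months each, the first two partitions take 2 months apiece and the final partition absorbs everything later.

        def partition_for_month(month_index, num_partitions=3, months_per_partition=2):
            """Return the 1-based range partition a given month offset falls into.

            Months beyond the configured ranges all land in the last partition,
            mirroring the 'final partition absorbs the rest' behaviour described above.
            """
            bucket = month_index // months_per_partition + 1
            return min(bucket, num_partitions)

        # Months 0..7 with the default 3 partitions x 2 months per partition:
        print([partition_for_month(m) for m in range(8)])   # [1, 1, 2, 2, 3, 3, 3, 3]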

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes
    The configuration of data node threads can be managed in two ways via the config.ini file:
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable, which enables users to configure both how many threads of each type to use and which CPUs to bind them to.
    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimize data nodes to use anything from a single CPU/thread up to a 48 CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64-80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ CPU sockets and dense multi-core processors.

    24 Threads and Beyond!
    An example of how to make best use of a 24 CPU/thread server is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication
    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance and setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to keep that activity away from the CPUs running MySQL Cluster threads. When booting a Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU affinity syscalls can a process be made to run on them. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning IRQ processing on those CPUs, a very stable performance environment is created for a MySQL Cluster data node.
    On a 32 CPU/thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient.
    On a 40 CPU/thread server, increase ldm threads to 16 and tc threads to 8, and increment the send and receive threads to 4.
    On a 48 CPU/thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - Avoid using the IO threads and the main thread on the same CPU
    - Add 1 more receive thread.

    Summary
    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a Proof of Concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
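
    As a rough planning aid, here is a small Python sketch that turns the 24 CPU/thread plan above into a candidate ThreadConfig string and the matching IRQBALANCE_BANNED_CPUS mask. The CPU numbering is illustrative, and the exact ThreadConfig syntax accepted by your MySQL Cluster release should be checked against its documentation before use:

        # Thread plan from the 24 CPU/thread example above; the IO threads can
        # share the main thread's CPU, so they are not listed separately here.
        plan = {"ldm": 8, "tc": 4, "recv": 3, "send": 3, "rep": 1, "main": 1}

        # Assign consecutive CPU IDs to each thread type (illustrative numbering).
        assignment, next_cpu = {}, 0
        for thread_type, count in plan.items():
            assignment[thread_type] = list(range(next_cpu, next_cpu + count))
            next_cpu += count

        # Candidate ThreadConfig value; verify the syntax for your release before use.
        thread_config = ",".join(
            "%s={count=%d,cpubind=%s}" % (t, len(cpus), ",".join(map(str, cpus)))
            for t, cpus in assignment.items()
        )
        print("ThreadConfig=" + thread_config)

        # One mask bit per bound CPU; CPUs 0-19 give 0xfffff, matching the post's 0x0FFFFF.
        bound = [c for cpus in assignment.values() for c in cpus]
        print("IRQBALANCE_BANNED_CPUS=%#x" % sum(1 << c for c in bound))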

    Read the article

  • Is Java free for mobile development?

    - by exTrace101
    Q1. I would like to know whether it's free for a developer (I mean, whether I have to pay any royalties to Sun/Oracle) to develop (Android) mobile apps in Java. After reading this snippet about the field-of-use restrictions on Java, I'm getting the impression that Java is not free for mobile development; is that right? .."General Purpose Desktop Computers and Servers" means computers, including desktop and laptop computers, or servers, used for general computing functions under end user control (such as but not specifically limited to email, general purpose Internet browsing, and office suite productivity tools). The use of Software in systems and solutions that provide dedicated functionality (other than as mentioned above) or designed for use in embedded or function-specific software applications, for example but not limited to: Software embedded in or bundled with industrial control systems, wireless mobile telephones, wireless handheld devices, netbooks, kiosks, TV/STB, Blu-ray Disc devices, telematics and network control switching equipment, printers and storage management systems, and other related systems are excluded from this definition and not licensed under this Agreement... and from http://www.excelsiorjet.com/embedded/ Notice: The Java SE Embedded technology license currently prohibits the use of Java SE in cell phones. Q2. How come this plethora of Android Java developers isn't paying Sun/Oracle a dime?

    Read the article

  • What are the differences between Bigloo and ECL from an embedding standpoint? [migrated]

    - by Pubby
    I've been looking to embed Lisp in some C++ code. Two options I'm interested in are Bigloo Scheme and ECL (Common Lisp). Reading through the docs, they seem to support a very similar feature set. Obviously Bigloo is Scheme and ECL is Common Lisp, but what other differences do they have? In particular I'm interested in the following criteria:
    Ease of embedding (for C++, not just C). I don't want to write a bunch of boilerplate.
    Performance. Bigloo is performance-oriented and has many compiler optimization options, although I can't find anything comparable for ECL.
    Style of coding. This one is more for Bigloo: is it more functional than ECL?
    I'm targeting this question towards someone who has used both.

    Read the article

  • Win Server 2003 - Task Scheduler - Tasks with GUI and Services

    - by august_month
    I need to run an Excel macro daily. I scheduled it with the Windows Task Scheduler and it worked fine until I had to change my password. I wonder if it's possible to have a task scheduled without a password? As an alternative we have third-party scheduling software, but it cannot launch Excel. Tech support said that since Excel has a GUI and the scheduling software runs as a service with "Allow service to interact with desktop" disabled, it cannot launch Excel. Tech support also mentioned that this option is no longer supported as of Vista. I totally trust the tech support guy; I just need a workaround that would make my network administrator and me happy. Regards.

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes
    - Explain the advantages of using a large number of registers
    - Discuss the way in which compilers optimize register usage
    - Discuss the evolution of CISC machines
    - Describe the characteristics of RISC architecture
    - Discuss the RISC vs. CISC controversy
    - Describe the way in which RISC and CISC design principles can be combined

    Instruction Execution Characteristics
    To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include:
    - Operations
    - Operands
    - Procedure calls
    These three sections can be studied in depth in the textbook at pages 503-505. A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (high-level languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, 3 main approaches came up to improve performance:
    - Use a large number of registers, or use a compiler to optimize register usage
    - Pay careful attention to the design of instruction pipelines
    - Use a simplified (reduced) instruction set

    The use of a large register file
    One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507-510 for a detailed explanation.

    Compiler-based register optimization

    Reduced Instruction Set Architecture
    There are two advantages to smaller programs:
    - Because the program takes up less memory, there is a savings in that resource (this was more compelling when memory was more expensive)
    - Smaller programs should improve performance, and this happens in two ways: fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults.
    Certain characteristics are common to RISC processors:
    - One instruction per cycle
    - Register-to-register operations
    - Simple addressing modes
    - Simple instruction formats

    RISC vs. CISC
    After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and CISC designs may benefit from the inclusion of some RISC features.

    Read the article

  • Copy Ubuntu distro with all settings from one computer to a different one

    - by theFisher86
    I'd like to copy my exact setup from my computer at work to my computer at home, and I'm trying to figure out how to go about doing that. So far I've figured this much out. On the source computer: run dpkg --get-selections > installed-software and back up the installed-software file; back up /etc/apt/sources.list; back up /usr/share/applications/ to save all my custom quicklists; back up /etc/fstab to save all my network mounts; back up /usr/share/themes/ to save the customization I've done to my themes. I'm also going to back up my entire HOME directory. Once I get to the destination computer, I'm going to do a fresh install of 11.10 first. Then I'll copy over my HOME directory, /etc/apt/sources.list, /usr/share/applications, /etc/fstab and /usr/share/themes/. Then I'm going to run dpkg --set-selections < installed-software, followed by dselect. That should install all of my apps for me. I'm wondering if there's a way/need to back up dconf and gconf settings from the source computer? I guess that's my ultimate question. I'd also like any notes on anything else that might need to be backed up before I undertake this project. I hope this post is legit; I figured other people would be interested in knowing this process, and I don't see any other questions that really document it here. I'd also like to take this project further and have each computer routinely back up all the necessary files so that both computers are basically identical at all times. That's stage 2, though...
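
    A small sketch of how the routine "stage 2" backup might be scripted in Python. The file list mirrors the one in the question; the backup directory name and the dconf dump step are assumptions (dconf dump / captures dconf settings as an INI-style text dump), not details taken from the question:

        import shutil
        import subprocess
        from pathlib import Path

        BACKUP_DIR = Path.home() / "machine-backup"   # hypothetical destination
        PATHS = [                                      # items listed in the question
            "/etc/apt/sources.list",
            "/etc/fstab",
            "/usr/share/applications",
            "/usr/share/themes",
        ]

        BACKUP_DIR.mkdir(parents=True, exist_ok=True)

        # Package selections, equivalent to `dpkg --get-selections > installed-software`.
        selections = subprocess.run(["dpkg", "--get-selections"],
                                    capture_output=True, text=True, check=True).stdout
        (BACKUP_DIR / "installed-software").write_text(selections)

        # dconf settings (assumption: `dconf dump /` captures the desktop configuration).
        dconf = subprocess.run(["dconf", "dump", "/"],
                               capture_output=True, text=True, check=True).stdout
        (BACKUP_DIR / "dconf-settings.ini").write_text(dconf)

        # Copy the individual files and directories (dirs_exist_ok needs Python 3.8+).
        for src in PATHS:
            src_path = Path(src)
            dest = BACKUP_DIR / src_path.name
            if src_path.is_dir():
                shutil.copytree(src_path, dest, dirs_exist_ok=True)
            else:
                shutil.copy2(src_path, dest)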

    Read the article

  • My Oracle Support Accreditation for Database and Enterprise Manager

    - by A. G.
    Have you actively used My Oracle Support for 6-9 months? Take your expertise to the next level and become accredited! By completing the accreditation learning series, you can increase your proficiency with My Oracle Support's core functions and build skills to help you leverage Oracle solutions, tools, and knowledge that enable productivity. Accreditation learning paths are available for Oracle Database and Enterprise Manager, which focus on product-specific best practices, recommendations, and tool enablement, up-leveling your capabilities with these Oracle products. Course topics include:
    Oracle Database: staying informed, install, patching, upgrade, performance, security, scalability.
    Enterprise Manager: staying informed, supportability, certification, patching, upgrade, performance, diagnostic tools, troubleshooting.
    Visit the My Oracle Support Accreditation Index and get started with the Level 1 My Oracle Support Accreditation path and product-specific Level 2 learning paths for Oracle Database and Enterprise Manager.

    Read the article

  • How to avoid or minimise the use of check/conditional statements?

    - by Muneeb Nasir
    I have a scenario where I receive a stream and need to check for certain values; whenever I receive a new value, I have to store it in some data structure. It seems very easy: I can use a conditional statement (if-else) or the contains method of a set/map to check whether the received value is new or not. But the problem is that the checking will affect my application's performance: the stream delivers hundreds of values per second, and if I start checking each and every value I receive, that will surely hurt performance. Can anybody suggest a mechanism or algorithm that solves my issue, either by bypassing the checks or at least minimising them?
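
    For scale, a hedged sketch of the baseline approach in Python (the names are illustrative): a hash set gives average O(1) membership checks, so a few hundred checks per second is far below what it can sustain; more elaborate structures such as Bloom filters only start to pay off when the set of seen values no longer fits comfortably in memory.

        seen = set()        # values observed so far
        new_values = []     # wherever genuinely new values should be stored

        def on_value(value):
            """Handle one value from the stream; O(1) average work per call."""
            if value not in seen:    # single hash lookup, not a scan
                seen.add(value)
                new_values.append(value)

        # Simulate a short burst from the stream.
        for v in [1, 2, 2, 3, 1, 4]:
            on_value(v)
        print(new_values)   # [1, 2, 3, 4]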

    Read the article

  • Oracle Exadata X3 Launch Webcast

    - by Cinzia Mascanzoni
    Available on-demand, this webcast covers everything your partners need to know about Oracle’s next-generation database machine. They will learn how to improve performance by storing multiple databases in memory, lower power and cooling costs by 30%, and easily deploy a cloud-based database service. Exadata X3 combines massive memory and low-cost disks to deliver the highest performance at the lowest cost. Partners won’t want to miss this webcast. Invite them to watch today! View and share the replay.

    Read the article

  • DOMDocument programming: a lot of little dilemmas, how to solve them?

    - by Peter Krauss
    I need elegance and performance: how do I decide on the "best implementation" for each DOM algorithm that I face? This simple "DOMNodeList grouper" illustrates many little dilemmas:
    use iterator_to_array or "populate an array", when not all items need to be copied?
    use the clone operator, the cloneNode method, or the importNode method?
    use parentNode::method() or documentElement::method()? (see here)
    removeChild first or replaceChild first, to avoid "side effects"?
    ...
    My position today is simply "make an arbitrary choice and follow it in all implementations" (like a "convention over configuration" principle)... But are there other considerations? As for performance, are there any articles showing benchmarks?
    PS: this is a generic DOM question; any language (PHP, JavaScript, Python, etc.) has the same problem.
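
    Since the question calls itself language-agnostic, here is a small Python xml.dom.minidom sketch of one of the dilemmas listed, cloneNode within a document versus importNode across documents; the XML strings are made up for illustration:

        from xml.dom.minidom import parseString

        doc_a = parseString("<root><item id='1'>alpha</item></root>")
        doc_b = parseString("<root/>")

        item = doc_a.documentElement.firstChild

        # cloneNode: a copy that still belongs to doc_a (same ownerDocument).
        clone = item.cloneNode(deep=True)
        doc_a.documentElement.appendChild(clone)

        # importNode: a copy re-owned by doc_b, which can then be appended there.
        imported = doc_b.importNode(item, deep=True)
        doc_b.documentElement.appendChild(imported)

        print(doc_a.documentElement.toxml())
        print(doc_b.documentElement.toxml())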

    Read the article

  • TCP: 30 small packets per second floods the connection with the server

    - by Denis Ermolin
    I'm testing the connection between a Flash client and a cloud server (boost::asio on the server side) over TCP. My connection to the server is already really poor: 120 ms ping on average. I found that when I start sending packets of 2 bytes (excluding the TCP header) at 30 packets/s, the ping grows to 170-200 ms on average. I think that is really bad, and that my poor connection and cloud provider are the reason for this high ping even without any load. What do you think? (I tested my software: it can process about 50k small packets/s, so the software is not the problem.) I measure the ping through the Flash client: I send a packet with a timestamp and the server immediately sends it back to the client.
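
    For reference, a minimal sketch of the timestamp-echo measurement the question describes, written in Python rather than ActionScript/C++; the host, port, and the assumption that the server echoes each packet back verbatim are illustrative, not details from the question:

        import socket
        import struct
        import time

        HOST, PORT = "127.0.0.1", 9000   # hypothetical echo server
        RATE = 30                        # packets per second, as in the question

        sock = socket.create_connection((HOST, PORT))
        samples = []

        for _ in range(RATE):
            sock.sendall(struct.pack("!d", time.perf_counter()))   # 8-byte timestamp payload
            data = sock.recv(8)                                    # assumes the echo arrives in one read
            rtt_ms = (time.perf_counter() - struct.unpack("!d", data)[0]) * 1000
            samples.append(rtt_ms)
            time.sleep(1 / RATE)

        print("average RTT: %.1f ms" % (sum(samples) / len(samples)))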

    Read the article

  • How can I check myself when I'm the only one working on a project?

    - by Ricardo Altamirano
    I'm between jobs in my field (unrelated to software development), and I recently picked up a temporary side contract writing a few applications for a firm. I'm the only person working on these specific applications. Are there ways I should be checking myself to make sure my applications are sound? I test my code, try to think of edge cases, generate sample data, use source control, etc., but since I'm the only person working on these applications, I'm worried I'll miss bugs that would easily be found in a team environment. Once I finish the application, either when I'm happy with it or when my deadline expires, the firm plans to use it in production. Any advice? Not to use a cliche, but as of now I simply work "to the best of my ability" and hope that it's enough. Incidentally, I'm under both strict NDAs and laws about classified material, so I don't discuss the applications with friends who have actually worked in software development. (In case it's not obvious, I am not a software developer by trade, and even my experience with other aspects of information technology/computer science is limited and mostly restricted to dabbling.)

    Read the article
