Search Results

Search found 4396 results on 176 pages for 'low poly'.

Page 142/176 | < Previous Page | 138 139 140 141 142 143 144 145 146 147 148 149  | Next Page >

  • HERMES Medical Solutions Helps Save Lives with MySQL

    - by Bertrand Matthelié
    HERMES Medical Solutions was established in 1976 in Stockholm, Sweden, and is a leading innovator in medical imaging hardware/software products for health care facilities worldwide. HERMES delivers a wide range of medical imaging solutions to optimize hospital workflow. HERMES' advanced algorithms make it possible to detect the smallest changes under therapy, which is essential for optimizing therapeutic methods and doses.
    Challenges: Fighting illness and disease requires state-of-the-art imaging modalities and software in order to diagnose accurately, stage disease appropriately and select the best treatment available. HERMES needed to select and implement a new database platform that would deliver the performance, reliability, security and flexibility required by its high-end medical solutions.
    Solution: HERMES decided to migrate from an in-house database to an embedded SQL database powering the HERMES products, delivered either as software, as integrated hardware and software solutions, or via the cloud in a software-as-a-service configuration. After evaluating several databases, HERMES selected MySQL for its high performance, ease of use and integration, and low Total Cost of Ownership. On average, between 4 and 12 terabytes of data are stored in the MySQL databases underpinning the HERMES solutions; the data generated by each medical study is kept for 10 years or more after the treatment was performed. MySQL-based HERMES systems also allow doctors worldwide to conduct new drug research projects leveraging the large amount of medical data collected. Hospitals and other HERMES customers worldwide highly value the "zero administration" capabilities and reliability of MySQL, which let them perform medical analysis without any downtime. Relying on MySQL as their embedded database, the HERMES team has been able to increase their focus on further developing their clinical applications. HERMES Medical Solutions was also able to leverage the Oracle Financing payment plan to spread its investment over time, making the MySQL choice even more valuable.
    "MySQL has proven to be an excellent database choice for us. We offer high-end medical solutions, and MySQL delivers the reliability, security and performance such solutions require." - Jan Bertling, CEO.

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted so I did a reinstall. I tested YouTube in FF during 'try me' and it worked - jerky, but it worked. I installed without all the updates (576 outstanding now) in order to get FF installed as per the demo - to no avail. In 'try me' mode FF NEVER crashed! After install, FF crashed whilst I was typing 'youtube' in the address field. When I finally got to YouTube - no video. What is the difference between FF in 'try me' and FF after install? Off to try some selected updates now to see if I can see it for myself.
    In a previous installation I had several profiles and aliased FF with the -safe-mode switch to simplify startup of the most stable FF. I also found that FF startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied... I have a SiS graphics card in a SiS motherboard (built for XP) and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.1 LTS, one day after install, with only the immediate upgrades requested for the language pack (English UK). I am using the FR Alternative keyboard and connected to a domestic wifi network from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video, as I can do my sms on FB - whilst FF is not crashed...
    Update: Is something overflowing? I have just had to reboot in order to get FF to restart in any way, shape or form - the restart-on-crash form generates a new crash form, etc. It was however a good half hour before it crashed, so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to FF. Still no video. Still crashing - especially when on the Grepolis website. Since the re-install I have had a lovely 1024x768 screen, but after the last FF crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note - I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.
    I spent a few days with Ubuntu on a different, newer machine which has now suffered a graphics breakdown. I returned to this old one again, but with a new flat screen monitor. I found SiS drivers for my graphics, BUT they are intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought, what the hell, I might not be able to do anything with either of them but this is the later one... The file will not open with the software manager - I found a similar problem on Overclock but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder... I have tried the solution offered on another SiS question, but it shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS... Does nobody know how FF changes from "try-me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also, one of the things I mock most about MS is the constant rebooting...
    Update 10/6/2014: I have installed chromium-browser - worse, it crashes even more often than FF. I have installed Epiphany - better; video works but not the associated soundtrack. Firefox is version 14.01 in 'try me' and version 29.0 from my install. Would it be useful to try to downgrade Firefox in order to get video?

    Read the article

  • Data breakpoints to find points where data gets broken

    - by raccoon_tim
    When working with a large code base, finding the reason for a bizarre bug can often be like finding a needle in a haystack. Finding out why an object gets corrupted with no apparent reason can be quite daunting, especially when it seems to happen randomly and totally out of context.
    Scenario: Take the following scenario as an example. You have defined a class that contains an array of characters that is 256 characters long. You now implement a method for filling this buffer with a string passed as an argument, and that method assumes the buffer is 256 characters long. At some point you notice that you require another character buffer, and you add it after the previous one in the class definition. You then figure that you don't need the 256 characters that the first member can hold, and you shorten it to 128 to conserve space. At this point you should realize that you also have to modify the method defined above to safeguard against buffer overflow. It so happens, however, that in this not so perfect world this does not cross your mind. Buffer overflow is one of the most frequent sources of errors in a piece of software and often one of the most difficult to detect, especially when data is read from an outside source. Many mass-copy functions provided by the C run-time come in versions with boundary checking (defined with the _s suffix), but they cannot guard against hard-coded buffer lengths that at some point get changed.
    Finding the bug: Getting back to the scenario, you're now wondering why the second string gets modified with data that makes no sense at all. Luckily, Visual Studio provides you with a tool to help you find just these kinds of errors. It's called data breakpoints. To add a data breakpoint, you first run your application in debug mode or attach to it in the usual way, and then go to Debug, select New Breakpoint and New Data Breakpoint. In the popup that opens, you can type in the memory address and the number of bytes you wish to monitor. You can also use an expression here, but it's often difficult to come up with an expression for data in an object allocated on the heap when not in the context of a certain stack frame. There are a couple of things to note about data breakpoints, however. First of all, Visual Studio supports a maximum of four data breakpoints at any given time. Another important thing to notice is that some C run-time functions modify memory in kernel space, which does not trigger the data breakpoint. For instance, calling ReadFile on a buffer that is monitored by a data breakpoint will not trigger the breakpoint. The debugger will now break as soon as the memory at the address you specified is modified. Often you might immediately spot the issue, but at the very least this feature can point you in the right direction in the search for the real reason why the memory gets inadvertently modified.
    Conclusions: Data breakpoints are a great feature, especially when doing a lot of low level operations where multiple locations modify the same data. With the exception of some special cases, like kernel memory modification, you can use them whenever you need to check when memory at a certain location gets changed, on purpose or inadvertently.
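    As a rough sketch of the scenario described above (the class and member names are invented for illustration), the overflow could look like this in C++:

    ```cpp
    #include <cstring>

    // Both buffers were originally declared as char[256]; 'name' was later
    // shortened to 128 characters and 'notes' was added right after it.
    class Record {
    public:
        char name[128];
        char notes[256];

        void SetName(const char* newName) {
            // Still written against the old 256-character size, so a long
            // enough argument silently spills over into 'notes'.
            strncpy(name, newName, 256);
        }
    };
    ```

    A data breakpoint placed on the first bytes of the second buffer (for example, the address of notes, 4 bytes) would fire the moment SetName writes past the end of name, pointing straight at the offending call.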

    Read the article

  • Does *every* project benefit from written specifications?

    - by nikie
    I know this is holy war territory, so please read the question to the end before answering. There are many cases where written specifications make a lot of sense. For example, if you're a contractor and you want to get paid, you need written specs. If you're working in a team of 20 people, you need written specs. If you're writing a programming language compiler or interpreter (and it's not Perl), you'll usually write a formal specification. I don't doubt that there are many more cases where written specifications are a really good idea. I just think that there are cases where there's so little benefit in written specs that it doesn't outweigh the costs of writing and maintaining them.
    EDIT: The close votes say that "it is difficult to say what is asked here", so let me clarify: The usefulness of written, detailed specifications is often claimed like a dogma. (If you want examples, look at the comments.) But I don't see the use of them for the kind of development I'm doing. So what is asked here is: How would written specifications help me?
    Background information: I work for a small company that's developing vertical market software. If our product is easier to use and has better performance than the competition, it sells. If it's harder to use, even if it behaves 100% as the specification says, it doesn't sell. So there are no "external forces" for having written specs. The advantage would have to be somewhere in the development process. Now, I can see how frozen specifications would make a developer's life easier. But we'll never have frozen specs. If we see in the middle of development that feature X is not intuitive to use the way it's specified, then we can only choose between changing the specification or developing a product that won't sell. You'll probably ask by now: How do you know when you're done? Well, we're continually improving our product. The competition does the same. So (hopefully) we're never done. We keep improving the software, and when we reach a point where the benefits of the improvements we've added since the last release outweigh the costs of an update, we create a new release that is then tested, localized, documented and deployed. This also means that there's rarely any schedule pressure. Nobody has to do overtime to make a deadline. If a feature isn't done by the time we want to release the next version, it'll simply go into the next version.
    The next question might be: How do your developers know what they're supposed to implement? The answer is: They have a lot of domain knowledge. They know the customers' business well enough that a high-level description of the feature (or even just the problem that the customer needs solved) is enough to implement it. If it's not clear, the developer creates a few fake screens to get feedback from marketing/management or customers, but this is nowhere near the level of detail of actual specifications. This might be inefficient for larger teams, but for a small team with low turnover it works quite well. It has the additional benefit that the developer in question often comes up with a better solution than the person writing the specs might have.
    This question is already getting very long, but let me address one last point: Testing. Like I said in the beginning, if our software behaves 100% like the spec says, it can still be crap. In fact, if it's so unintuitive that you need a spec to know how to test it, it probably is crap. It makes sense to have fixed, written tests for some core functionality and for regression bugs, but again, this is nowhere near a full written spec of how the software should behave and when. The main test is: hand the software to a user who doesn't know it yet and tell her to use the new feature X. If she can figure out how to use it and it works, it works.

    Read the article

  • Application Logging needs work

    Application Logging. Application logging is the act of logging events that occur within an application, much like a court reporter documents what happens in a court case. Application logs can be useful for several reasons, but the most common use is to recreate the steps that led to an application error so its root cause can be found. Other uses include the detection of fraud, verification of user activity, and audits of user/data interactions. "Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging." (OWASP, 2009) OWASP also states that logging should include applicable debugging information such as the event date and time, the responsible process, and a description of the event. "There are many reasons why a logging system is a necessary part of delivering a distributed application. One of the most important is the ability to track exactly how many users are using the application during different time periods." (Hatton, 2000) Hatton also states that application logging helps system designers determine whether parts of an application aren't being used as designed. He implies that low usage can identify aspects of a system that users do not like, so that changes can be made to increase the application's usefulness and effectiveness. "Logging memory usage can also assist you in tuning up the internals of your application. If you're experiencing a randomly occurring problem, being able to match activities performed with the memory status at the time may enable you to discover the cause of the problem. It also gives you a good indication of the health of the distributed server machine at the time any activity is performed." (Hatton, 2000)
    Commonly Logged Application Events (defined by OWASP):
    - Access of data
    - Creation of data
    - Modification of data in any form
    - Administrative functions
    - Configuration changes
    - Debugging information (application events)
    - Authorization attempts
    - Data deletion
    - Network communication
    - Authentication events
    - Errors/exceptions
    Application Error Logging. The functionality associated with application error logging is actually the combination of proper error handling and application logging. Looking back at Figure 4 and Figure 5, those code examples allow developers to handle the various types of errors that occur within the life cycle of an application's execution. Application logging can be applied within the catch section of the try/catch statement, allowing errors to be logged when they occur. By placing the logging within the catch section, specific error details can be accessed that help identify the source of the error, the path to the error, what caused the error, and the definition of the error that occurred. This can then be logged and reviewed at a later date in order to recreate the error based on data found in the application log. By letting applications log errors, developers and IT staff can use the logs to recreate errors that are encountered by end users or other dependent systems.
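    As a minimal sketch of the pattern described above - logging applied inside the catch block - consider the following (the function and file names are invented for illustration; the article's Figure 4 and Figure 5 are not reproduced here):

    ```cpp
    #include <chrono>
    #include <ctime>
    #include <fstream>
    #include <stdexcept>
    #include <string>

    // Append one timestamped line per event to the application log.
    void logEvent(const std::string& message) {
        std::ofstream log("application.log", std::ios::app);
        std::time_t now = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
        log << now << " " << message << "\n";
    }

    void updateRecord(int id) {
        try {
            logEvent("Modification of data: record " + std::to_string(id));
            // ... perform the update ...
            throw std::runtime_error("connection lost"); // simulated failure
        } catch (const std::exception& ex) {
            // The error logging lives in the catch block, so the details of
            // what failed and where are captured for later review.
            logEvent(std::string("Error in updateRecord: ") + ex.what());
            throw; // let callers decide how to recover
        }
    }
    ```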

    Read the article

  • Career-Defining Moments

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2013/06/25/career-defining-moments.aspx
    Fear holds us back from many things. A little fear is healthy, but don't let it overwhelm you into missing opportunities. In every career there is a moment when you can either step forward and define yourself, or sit down and regret it later. Why do we hold back: is it fear, constraints, family concerns, or that we simply can't do it? I think in many cases it comes down to the unknown, and we are good at fearing the unknown. Some people hold back because they are fearful of what they don't know. Some hold back because they are fearful of learning new things. Some hold back simply because taking on a new challenge means they have to give something else up. The phrase sometimes used is "It's the devil you know versus the one you don't." That fear sometimes allows us to miss great opportunities. In many people's case it is the opportunity to go into business for yourself, to start something that never existed. Most hold back here for fear of failing. We've all heard the phrase "What would you do if you knew you couldn't fail?", which is intended to get people to think about the opportunities they might create. A better wording I heard recently on the Ruby Rogues podcast was "What would be worth doing even if you knew you were going to fail?" I think that wording suits the intent better. If you knew (or thought) going in that you were going to fail and you didn't care, it would open you up to the possibility of paying more attention to the journey and not the outcome.
    In my case it is a fear of acceptance. I am fearful that I may not learn what I need to learn or may not do a good enough job to be accepted. At the same time that fear drives me and makes me want to leap forward. Some folks would define this as "The Flinch". I'm learning Ruby and Puppet right now. I have limited experience with both, limited to the degree that it scares me some that I don't know much about either. Okay, it scares me quite a bit! Some people's defining moment might be going to work for Microsoft. All of you who know me know that I am in love with automation, from low-tech to high-tech automation. So for me, my "mecca" is a little different in that regard. A while back I sat down and defined where I wanted my career to go, and it had more to do with DevOps, defined as applying developer practices to system administration operations (I could not find this definition when I searched). It's an area that interests me and why I really want to expand chocolatey into something more awesome. I want to see Windows be as automatable and awesome as other operating systems that are out there.
    Back to the career-defining moment. Sometimes these moments only come once in a lifetime. The key is to recognize when you are in one of these moments and step back to evaluate it before choosing to dive in head first. So I am about to embark on what I define as one of these "moments": on July 1st I will be joining Puppet Labs and working to help make the Windows automation experience rock solid! I'm both scared and excited about the opportunity!

    Read the article

  • MySQL Enterprise Backup 3.8.2 has been released!

    - by Hema Sridharan
    MySQL Enterprise Backup v3.8.2, a maintenance release of the online MySQL backup tool, is now available for download from the My Oracle Support (MOS) website as our latest GA release. It will also be available via the Oracle Software Delivery Cloud in approximately 1-2 weeks. A brief summary of the changes in MySQL Enterprise Backup version 3.8.2 is given below.
    A. Functionality Added or Changed:
    - MySQL Enterprise Backup has a new --on-disk-full command line option. mysqlbackup could hang when the disk became full, rather than detecting the low space condition. mysqlbackup now monitors disk space when running backup commands, and users can specify the action to take at a disk-full condition with the --on-disk-full option. For more details, refer to this page.
    - MySQL Enterprise Backup has a new progress report feature, which periodically outputs short progress indicators on its operations to user-selected destinations (for example, stdout, stderr, a file, or other choices). For more details on progress report options, refer here.
    B. Bugs Fixed:
    - When --innodb-file-per-table=ON, if a table was renamed and backup-to-image was in progress, apply-log would fail when being run on the backup. (Bug #16903973)
    - MySQL Server failed to start after a backup was restored if there had been online DDL transactions on partitioned tables during the time of backup. (Bug #16924499)
    - apply-log failed if ALTER TABLE ... REORGANIZE PARTITION was applied to partitioned InnoDB tables during backup. (Bug #16721824, Bug #16903951)
    - apply-incremental-backup might fail with an assertion error if the InnoDB tables being backed up were created in Barracuda format and with KEY_BLOCK_SIZE values different from the innodb_page_size. This fix ensures that different KEY_BLOCK_SIZE values are handled properly during incremental backup and apply-incremental-backup operations.
    - If a table was renamed following a full backup, a subsequent incremental backup could copy the .frm file with the new name, but not the associated .ibd file with the new name. After a restore, the InnoDB data dictionary could be in an inconsistent state. This issue primarily occurred if the table was not changed between the full backup and the subsequent incremental backup. (Bug #16262690)
    - After a full backup, if a table was renamed and modified, apply-incremental-backup would crash when run on the backup directory. (Bug #16262609)
    - The value of the binary log position in backup_variables.txt could be different from the output displayed during the backup-and-apply-log operation. (This issue did not occur if the backup and apply-log steps were done separately.) (Bug #16195529)
    - When using the --only-innodb-with-frm option, MySQL Enterprise Backup tried to create temporary files at unintended locations in the file system, which might cause a failure when, for example, the user had no write privilege for those locations. This fix makes sure the paths for the temporary files are correct. (Bug #14787324)
    - A backup process might hang when it ran into an LSN mismatch between a data file and the redo log. This fix makes sure the process does not hang and that it displays an error message showing the name of the problematic data file. (Bug #14791645)
    Please post your questions / comments about Backup in the forums. Thanks, MEB Team

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the Multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I tried so far.
    Problem:
                  R1 R2 R3
    RESOURCES:     8  8  8
    GROUPS:
    G1: 11.0       3  2  2
        12.0       1  1  3
    G2: 20.0       1  1  3
         5.0       2  3  2
    G3: 10.0       2  2  3
        30.0       1  1  3
    Sorting strategies: To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the origin of the multidimensional space, i.e. computing SQRT(R1^2 + R2^2 + ... + RN^2). I felt this was a keen solution as it somehow privileges choices whose resource usages are closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many [30] different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above, which sorts like this:
                  R1 R2 R3
    RESOURCES:     8  8  8
    GROUPS:
    G1: 12.0       1  1  3   < select this
        11.0       3  2  2
    G2: 20.0       1  1  3   < select this
         5.0       2  3  2
    G3: 30.0       1  1  3   < select this
        10.0       2  2  3
    And it is not feasible, because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second-best choices in group 1 or 2, so I'll need some kind of iteration (local search[?]) to find the starting feasible solution for my local search solution. Here are the options I came up with.
    Option 1: iterate choices. I tried to find a way to iterate all the choices in a specific order, something like
    G1 G2 G3
     1  1  1
     2  1  1
     1  2  1
     1  1  2
     2  2  1
    ...
    believing that feasible solutions won't be that far away from the unfeasible one I start with, and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group while keeping "as near as possible" to the previous iteration?
    Option 2: change the comparison term. I tried to think of a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand for a more precious resource will push you down the list, but this didn't help at all. I also thought there probably isn't going to be a comparison variable which assures me a feasible solution on the first try. Is there such a variable? If not, is there a better sorting criterion anyway?
    Option 3: implement any known sub-optimal fast solving algorithm. Unfortunately I could not find any such algorithm online. Any suggestion?
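    For what it's worth, here is a minimal sketch of the sorting idea described above, run against the example data (the structure and names are invented for the sketch):

    ```cpp
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct Choice {
        double gain;
        std::vector<double> use; // resource usage R1..Rn
    };

    // Distance of a choice's resource usage from the origin.
    double norm(const Choice& c) {
        double s = 0.0;
        for (double r : c.use) s += r * r;
        return std::sqrt(s);
    }

    int main() {
        const std::vector<double> capacity = {8, 8, 8};
        // One vector of choices per group (the example problem above).
        const std::vector<std::vector<Choice>> groups = {
            {{11.0, {3, 2, 2}}, {12.0, {1, 1, 3}}},
            {{20.0, {1, 1, 3}}, { 5.0, {2, 3, 2}}},
            {{10.0, {2, 2, 3}}, {30.0, {1, 1, 3}}},
        };

        std::vector<double> used(capacity.size(), 0.0);
        for (const auto& g : groups) {
            // Pick the choice closest to the origin in resource space.
            const auto best = std::min_element(g.begin(), g.end(),
                [](const Choice& a, const Choice& b) { return norm(a) < norm(b); });
            for (std::size_t i = 0; i < used.size(); ++i) used[i] += best->use[i];
        }

        bool feasible = true;
        for (std::size_t i = 0; i < used.size(); ++i)
            if (used[i] > capacity[i]) feasible = false;

        std::cout << (feasible ? "feasible start" : "not feasible, iterate") << "\n";
        return 0;
    }
    ```

    On this data the sketch picks the three {1 1 3} strategies and reports the start as infeasible (R3 ends up at 9 > 8), which is exactly the point where some repair step such as Option 1 above would have to kick in.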

    Read the article

  • Ubuntu 12.10 no network and no graphics

    - by khasiKoMasu
    I recently upgraded Ubuntu 12.04 to 12.10, only to find out that it won't connect to any network, wired or wireless, and the graphics are messed up too, stuck at a low screen resolution. On 12.04 my system was running perfectly. I don't know why the upgrade messed it up so badly. Reinstalling the OS is an issue because I have set up a lot of development environments that I cannot afford to set up again. Some of the outputs:
    lspci -nn | grep 0200
    02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02)
    nm-tool
    NetworkManager Tool State: disconnected
    cat /etc/network/interfaces
    auto lo
    iface lo inet loopback
    sudo cat /var/log/syslog | grep etwork | tail -n20
    Nov 2 13:50:22 Cobalt NetworkManager[978]: SCPlugin-Ifupdown: (-1240454760) ... get_connections (managed=false): return empty list.
    Nov 2 13:50:22 Cobalt NetworkManager[978]: Ifupdown: get unmanaged devices count: 0
    Nov 2 13:50:22 Cobalt bluetoothd[1016]: Failed to init network plugin
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> modem-manager is now available
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> monitoring kernel firmware directory '/lib/firmware'.
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WiFi enabled by radio killswitch; enabled by state file
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WWAN enabled by radio killswitch; enabled by state file
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> WiMAX enabled by radio killswitch; enabled by state file
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <info> Networking is enabled by state file
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> /sys/devices/virtual/net/lo: couldn't determine device driver; ignoring...
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> /sys/devices/virtual/net/lo: couldn't determine device driver; ignoring...
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
    Nov 2 13:50:22 Cobalt kernel: [ 28.688167] type=1400 audit(1351882222.452:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=1046 comm="apparmor_parser"
    Nov 2 13:50:22 Cobalt bluetoothd[1062]: Failed to init network plugin
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
    Nov 2 13:50:22 Cobalt bluetoothd[1118]: Failed to init network plugin
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
    Nov 2 13:50:22 Cobalt bluetoothd[1237]: Failed to init network plugin
    Nov 2 13:50:22 Cobalt NetworkManager[978]: <warn> bluez error getting default adapter: Message did not receive a reply (timeout by message bus)
    ps aux | grep -i network
    root 978 0.0 0.1 23732 4808 ? Ssl 13:50 0:00 NetworkManager
    sudo modprobe -r forcedeth
    FATAL: Module forcedeth not found

    Read the article

  • What do you need to know to be a world-class master software developer? [closed]

    - by glitch
    I wanted to bring up this question to you folks and see what you think, hopefully advise me on the matter: let's say you had 30 years of learning and practicing software development in front of you, how would you dedicate your time so that you'd get the biggest bang for your buck. What would you both learn and work on to be a world-class software developer that would make a large impact on the industry and leave behind a legacy? I think that most great developers end up being both broad generalists and specialists in one-two areas of interest. I'm thinking Bill Joy, John Carmack, Linus Torvalds, K&R and so on. I'm thinking that perhaps one approach would be to break things down by categories and establish a base minimum of "software development" greatness. I'm thinking: Operating Systems: completely internalize the core concepts of OS, perhaps gain a lot of familiarity with an OSS one such as Linux. Anything from memory management to device drivers has to be complete second nature. Programming Languages: this is one of those topics that imho has to be fully grokked even if it might take many years. I don't think there's quite anything like going through the process of developing your own compiler, understanding language design trade-offs and so on. Programming Language Pragmatics is one of my favorite books actually, I think you want to have that internalized back to back, and that's just the start. You could go significantly deeper, but I think it's time well spent, because it's such a crucial building block. As a subset of that, you want to really understand the different programming paradigms out there. Imperative, declarative, logic, functional and so on. Anything from assembly to LISP should be at the very least comfortable to write in. Contexts: I believe one should have experience working in different contexts to truly be able to appreciate the trade-offs that are being made every day. Embedded, web development, mobile development, UX development, distributed, cloud computing and so on. Hardware: I'm somewhat conflicted about this one. I think you want some understanding of computer architecture at a low level, but I feel like the concepts that will truly matter will be slightly higher level, such as CPU caching / memory hierarchy, ILP, and so on. Networking: we live in a completely network-dependent era. Having a good understanding of the OSI model, knowing how the Web works, how HTTP works and so on is pretty much a pre-requisite these days. Distributed systems: once again, everything's distributed these days, it's getting progressively harder to ignore this reality. Slightly related, perhaps add solid understanding of how browsers work to that, since the world seems to be moving so much to interfacing with everything through a browser. Tools: Have a really broad toolset that you're familiar with, one that continuously expands throughout the years. Communication: I think being a great writer, effective communicator and a phenomenal team player is pretty much a prerequisite for a lot of a software developer's greatness. It can't be overstated. Software engineering: understanding the process of building software, team dynamics, the requirements of the business-side, all the pitfalls. You want to deeply understand where what you're writing fits from the market perspective. The better you understand all of this, the more of your work will actually see the daylight. This is really just a starting list, I'm confident that there's a ton of other material that you need to master. 
As I mentioned, you most likely end up specializing in a bunch of these areas as you go along, but I was trying to come up with a baseline. Any thoughts, suggestions and words of wisdom from the grizzled veterans out there who would like to share their thoughts and experiences with this? I'd really love to know what you think!

    Read the article

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views. Let's take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARTIAL_STATUS, CUST_INCOME_LEVEL). The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns and the statistics gathering procedure skips NULL values. The CUST_MARTIAL_STATUS (38,072) and CUST_INCOME_LEVEL (55,459) columns show less than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table 17,428 rows have NULL as the value of the CUST_MARTIAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have a NULL value for the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm uses all non-NULL values when gathering basic table and column level statistics.
    Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, but if we look at the column statistics we see that it is the same as in the previous case except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics. AUTO_SAMPLE_SIZE still uses all the rows in the table minus the NULL rows to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column. When we create a histogram we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450 rows), as these columns contained only non-null values. However, if one or more of the columns that require a histogram have null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the number of nulls varies greatly between the columns, we may create multiple samples, one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column. +Maria Colgan

    Read the article

  • A carrier-grade server with the SPARC T4 processor: Netra SPARC T4-1

    - by user13138700
    Announced on January 10, 2012, the Netra SPARC T4-1 and Netra SPARC T4-2 are carrier-grade servers based on the SPARC T4 processor. The system examined here is a Netra SPARC T4-1 with a 4-core SPARC T4 (the Netra SPARC T4-1 can be configured with either a 4-core or an 8-core T4). With 8 threads per core, prtdiag reports 4 cores x 8 threads = 32 virtual processors, and pginfo shows how those virtual processors map onto the cores:
    # ./prtdiag -v
    System Configuration: Oracle Corporation sun4v Netra SPARC T4-1
    Memory size: 130560 Mbytes
    ================================ Virtual CPUs ================================
    CPU ID Frequency Implementation Status
    ------ --------- ---------------------- -------
    0      2848 MHz  SPARC-T4               on-line
    1      2848 MHz  SPARC-T4               on-line
    2      2848 MHz  SPARC-T4               on-line
    3      2848 MHz  SPARC-T4               on-line
    4      2848 MHz  SPARC-T4               on-line
    5      2848 MHz  SPARC-T4               on-line
    6      2848 MHz  SPARC-T4               on-line
    7      2848 MHz  SPARC-T4               on-line
    8      2848 MHz  SPARC-T4               on-line
    9      2848 MHz  SPARC-T4               on-line
    10     2848 MHz  SPARC-T4               on-line
    11     2848 MHz  SPARC-T4               on-line
    12     2848 MHz  SPARC-T4               on-line
    13     2848 MHz  SPARC-T4               on-line
    14     2848 MHz  SPARC-T4               on-line
    15     2848 MHz  SPARC-T4               on-line
    16     2848 MHz  SPARC-T4               on-line
    17     2848 MHz  SPARC-T4               on-line
    18     2848 MHz  SPARC-T4               on-line
    19     2848 MHz  SPARC-T4               on-line
    20     2848 MHz  SPARC-T4               on-line
    21     2848 MHz  SPARC-T4               on-line
    22     2848 MHz  SPARC-T4               on-line
    23     2848 MHz  SPARC-T4               on-line
    24     2848 MHz  SPARC-T4               on-line
    25     2848 MHz  SPARC-T4               on-line
    26     2848 MHz  SPARC-T4               on-line
    27     2848 MHz  SPARC-T4               on-line
    28     2848 MHz  SPARC-T4               on-line
    29     2848 MHz  SPARC-T4               on-line
    30     2848 MHz  SPARC-T4               on-line
    31     2848 MHz  SPARC-T4               on-line
    ======================= Physical Memory Configuration ======================== (output omitted)
    # pginfo -p -T
    0 (System [system,chip]) CPUs: 0-31
    `-- 3 (Data_Pipe_to_memory [system,chip]) CPUs: 0-31
        |-- 2 (Floating_Point_Unit [core]) CPUs: 0-7
        |   `-- 1 (Integer_Pipeline [core]) CPUs: 0-7
        |-- 5 (Floating_Point_Unit [core]) CPUs: 8-15
        |   `-- 4 (Integer_Pipeline [core]) CPUs: 8-15
        |-- 7 (Floating_Point_Unit [core]) CPUs: 16-23
        |   `-- 6 (Integer_Pipeline [core]) CPUs: 16-23
        `-- 9 (Floating_Point_Unit [core]) CPUs: 24-31
            `-- 8 (Integer_Pipeline [core]) CPUs: 24-31
    The T4 keeps the throughput orientation of its predecessors while greatly improving single-thread performance: compared with the T3 core (S2 core), the T4 core (S3 core) delivers roughly 5x better single-thread performance. The T3 core (S2 core) was designed for throughput across many threads, which made it a poor fit for workloads dominated by a single thread; with the T4, even this entry 4-core Netra SPARC T4-1 handles such workloads far better than the T3 generation did.
    Netra SPARC T4-1 specifications:
    - Computing: 1 x SPARC T4, 4 cores / 32 threads or 8 cores / 64 threads, 2.85 GHz (8 threads per core); 16 x DDR3 DIMM slots (up to 256 GB using 16 GB DIMMs)
    - I/O and Storage: 3 x low-profile PCI-Express Gen2 slots (including 2 x 10Gb Ethernet XAUI capable slots), 2 x full-height half-length PCI-Express Gen2 slots, 4 x 10/100/1000 Ethernet ports, 4 x 2.5" SAS2 HDD, 4 x USB ports (2 front, 2 rear)
    - RAS, Management and Power Supply: redundant components (RAID support), hot-swappable PSUs, ILOM remote management, 2N (1+1) power, AC or DC input
    - Supported OS: Oracle Solaris 10 10/09, 9/10, 8/11; Oracle Solaris 11 11/11; Oracle VM Server for SPARC 2.1 (LDoms)
    - Other: NEBS Level 3 certified; mountable in 21", 19" (EIA-310D), 23", 24" and 600 mm racks
    SPARC T4 sessions will be held at Oracle OpenWorld Tokyo 2012, which runs over three days (April 4, 5 and 6, 2012).
    Oracle OpenWorld Tokyo 2012: http://www.oracle.com/openworld/jp-ja/index.html
    Session: April 6 (Fri), Develop D3-13 (14:00 - 14:45)

    Read the article

  • Play audio file data - Spring MVC

    - by Vijay Veeraraghavan
    In my web application, I have various audio clips uploaded by the users, stored in a BLOB column in the database. The audio files are low bit rate WAV files. The clips are secured; one can see only those clips he has uploaded. Instead of the user downloading the clip and playing it in his player, I need it to be streamed online in the web page itself. In the jsp I use the <audio> tag with the source mapping to the controller mapping URL. <td> <audio controls><source src="recfile/${au.id}" type="audio/mpeg" /></audio> </td> Here recfile is the request mapping and au.id is the audio id. In the controller I process the request like below: @RequestMapping(value = "/recfile/{id}", method = RequestMethod.GET, produces = { MediaType.APPLICATION_OCTET_STREAM_VALUE }) public HttpEntity<byte[]> downloadRecipientFile(@PathVariable("id") int id, ModelMap model, HttpServletResponse response) throws IOException, ServletException { LOGGER.debug("[GroupListController downloadRecipientFile]"); VoiceAudioLibrary dGroup = audioClipService.findAudioClip(id); if (dGroup == null || dGroup.getAudioData() == null || dGroup.getAudioData().length <= 0) { throw new ServletException("No clip found/clip has no data, id=" + id); } HttpHeaders header = new HttpHeaders(); // I tried this too: //header.setContentType(new MediaType("audio", "mp3")); header.setContentType(new MediaType("audio", "vnd.wave")); header.setContentLength(dGroup.getAudioData().length); return new HttpEntity<byte[]>(dGroup.getAudioData(), header); } When the jsp loads, the controller gets the request and serves back the audio data fetched from the database, and the jsp shows the player with its controls. But when I play it, nothing happens. Why is that? Am I missing anything in the configuration? Am I doing it right?

    Read the article

  • Use MTOM/streaming from C# calling a webservice in java exposed via jaxws

    - by raticulin
    We have this webservice created with jax-ws: @WebService(name = "Mywebser", targetNamespace = "http://namespace") @MTOM(threshold = 2048) @SOAPBinding(style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL, parameterStyle = SOAPBinding.ParameterStyle.WRAPPED) public class Mywebser { @WebMethod(operationName = "doStreaming", action = "urn:doStreaming") @WebResult(name = "return") public ResultInfo doStreaming(String username, String pwd, @XmlMimeType("application/octet-stream") DataHandler data, boolean overw){ ... } } The generated client side looks like this: @WebMethod(action = "urn:doStreaming") @WebResult(targetNamespace = "") @RequestWrapper(localName = "doStreaming", targetNamespace = "http://namespace", className = "com.mypack.client.doStreaming") @ResponseWrapper(localName = "doStreamingResponse", targetNamespace = "http://namespace", className = "com.mypack.client.doStreamingResponse") public ResultInfo doStreaming( @WebParam(name = "arg0", targetNamespace = "") String arg0, @WebParam(name = "arg1", targetNamespace = "") String arg1, @WebParam(name = "arg2", targetNamespace = "") DataHandler arg2, @WebParam(name = "arg3", targetNamespace = "") boolean arg3); Used this way it streams properly (verified: we can pass an 80 MB argument even when the JVM was allowed less memory than that): MywebserService serv = ...; Mywebser wso = serv.getMywebserPort(new MTOMFeature()); Map<String, Object> ctxt = ((BindingProvider) wso).getRequestContext(); ctxt.put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192); DataHandler dataHandler = new DataHandler(new FileDataSource("c:\\temp\\A.dat")); arcres = wso.doStreaming("a", "b", dataHandler, true); We generate a client for .net with VS2008, using "Add Web Reference", and we get this C# code: [System.Web.Services.Protocols.SoapDocumentMethodAttribute("urn:doStreaming",RequestNamespace="http://namespace",ResponseNamespace="http://namespace",Use=System.Web.Services.Description.SoapBindingUse.Literal,ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)] [return: System.Xml.Serialization.XmlElementAttribute("return",Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] public ResultInfo doStreaming( [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg0, [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg1, [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified,DataType="base64Binary")] byte[] arg2, [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] bool arg3) Apparently this is not using streaming? The base64Binary type of arg2 does not seem right; in Java it's a DataHandler. By testing it with low memory on the Java side we can see it is not using streaming, as it fails with OOM. Does someone know if this is possible, and if so, how? Our environment: server: jdk1.6, jaxws 2.1.7; client: C# 2.0, Visual Studio 2008

    Read the article

  • Detect user logout / shutdown in Python / GTK under Linux

    - by Ivo Wetzel
    OK, this is presumably a hard one. I've got a pyGTK application that has random crashes due to X Window errors that I can't catch/control, so I created a wrapper that restarts the app as soon as it detects a crash. Now comes the problem: when the user logs out or shuts down the system, the app exits with status 1. But on some X errors it does so too. So I tried literally everything to catch the shutdown/logout, with no success. Here's what I've tried:
    import pygtk
    import gtk
    import sys

    class Test(gtk.Window):
        def delete_event(self, widget, event, data=None):
            open("delete_event", "wb")

        def destroy_event(self, widget, data=None):
            open("destroy_event", "wb")

        def destroy_event2(self, widget, event, data=None):
            open("destroy_event2", "wb")

        def __init__(self):
            gtk.Window.__init__(self, gtk.WINDOW_TOPLEVEL)
            self.show()
            self.connect("delete_event", self.delete_event)
            self.connect("destroy", self.destroy_event)
            self.connect("destroy-event", self.destroy_event2)

    def foo():
        open("add_event", "wb")

    def ex():
        open("sys_event", "wb")

    from signal import *

    def clean(sig):
        f = open("sig_event", "wb")
        f.write(str(sig))
        f.close()
        exit(0)

    for sig in (SIGABRT, SIGILL, SIGINT, SIGSEGV, SIGTERM):
        signal(sig, lambda *args: clean(sig))

    def at():
        open("at_event", "wb")

    import atexit
    atexit.register(at)

    f = Test()
    sys.exitfunc = ex
    gtk.quit_add(gtk.main_level(), foo)
    gtk.main()

    open("exit_event", "wb")
    Not one of these succeeds. Is there any low-level way to detect the system shutdown? Google didn't find anything related to it. I guess there must be a way, am I right? :/

    Read the article

  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today when my battery was low, I was working and constantly saving my source files; then the power went out... Now that I have plugged my computer back in and it is getting good power, I try to open my project file and I get an error: "Unable to Open Project Project ... cannot be opened because the project file cannot be parsed." Is there a way that people know of that I can recover from this? I tried using an older project file, re-inserting it and then compiling. It gives me a funky error, which is probably because it isn't finding all the files it wants... I really don't want to rebuild my project from scratch if possible. Thanks in advance.
    EDIT: Ok, I did a diff between this and a slightly older project file that worked and saw that there was some corruption in the file. After merging them (the good and newest parts) it is now working. Great points about the SVN. I have one, but there has been some funkiness trying to sync Xcode with it. I'll definitely spend more time with it now... ;-)
    Thanks for everyone's comments and suggestions.

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These have been ongoing problems that I always thought were fixed, only to find out they were still lurking. The product itself is a collection of binaries that communicate with each other via .Net remoting, including a Windows Service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .Net remoting interfaces. I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around Wow64 virtualization (x86 MSIs can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system. What I am looking for is tips + tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (you can't wrap a try{} block around function declarations) and GPF'ing the whole app. I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure that if a file needs to go to the GAC, it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom

    Read the article

  • WCF listenBacklog and maxConnections can't be set higher than 10. Why not?

    - by NeedWCFPro
    My service works great under low load. But under high load I start to get connection errors. I know about other settings but I am trying to change the listenBacklog parameter in particular for my TCP Buffered binding. If I set listenBacklog="10" I am able to telnet into the port where my WCF service is running. If I change listenBacklog to anything higher than 10 it will not let me telnet into my service when it is running. No errors seem to be thrown. What can I do? I get the same problem when I change my maxConnections away from 10. All other properties of the binding I can set higher without a problem. Here is what my binding looks like: <bindings> <netTcpBinding> <binding name="NetTcpBinding_IMyService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="1048576" maxConnections="10" maxReceivedMessageSize="1048576"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign"> </transport> <message clientCredentialType="Windows" /> </security> </binding> ... I really need to increase the values of maxConnections and listenBacklog

    Read the article

  • ExtJS: Combobox in EditorGridPanel not selecting the desired item (with test case)

    - by TomH
    I'm using ExtJS to create an EditorGridPanel with a combobox as the editor for a cell. The combobox in my EditorGridPanel is not working as I'd expect it to. When the user types the first letter of an item in the drop-down list, the combobox seems to ignore it and selects the first item in the list. I can reproduce the error consistently and have put together a test case here: http://cluebucket.com/dev/testcase/testcase.html
    Load the page and reproduce the behavior with the following steps -- note that this is all done using the keyboard, no mouse clicks:
    - Click 'Add Record' (a new row is added to the grid), enter text in the text field, TAB to the Priority field without selecting anything (None will remain selected), then TAB out of the Priority field.
    - A new row is added to the grid; enter text and TAB to the Priority field, type v (Very High is selected), then TAB out of the Priority field.
    - A new row is added to the grid; enter text and TAB to the Priority field, type v (None is selected, but Very High should have been), then TAB out of the Priority field.
    - Enter text and TAB to the Priority field, type l ('el') (Low is selected); TAB out, enter text, TAB to the Priority field, type l (None is selected).
    It appears that whenever the user attempts to select the same value that was selected in the previous row, the combobox selects None. Any ideas? The code is available at cluebucket.com/dev/testcase/js/testcase.js
    Thoughts/Pointers/Corrections are appreciated!! thanks tom

    Read the article

  • Merge method in MergeSort Algorithm .

    - by Tony
    I've seen many mergeSort implementations. Here is the version in Data Structures and Algorithms in Java (2nd Edition) by Robert Lafore:
    private void recMergeSort(long[] workSpace, int lowerBound, int upperBound)
    {
        if(lowerBound == upperBound)          // if range is 1,
            return;                           // no use sorting
        else
        {                                     // find midpoint
            int mid = (lowerBound+upperBound) / 2;
                                              // sort low half
            recMergeSort(workSpace, lowerBound, mid);
                                              // sort high half
            recMergeSort(workSpace, mid+1, upperBound);
                                              // merge them
            merge(workSpace, lowerBound, mid+1, upperBound);
        }  // end else
    }  // end recMergeSort()

    private void merge(long[] workSpace, int lowPtr, int highPtr, int upperBound)
    {
        int j = 0;                            // workspace index
        int lowerBound = lowPtr;
        int mid = highPtr-1;
        int n = upperBound-lowerBound+1;      // # of items

        while(lowPtr <= mid && highPtr <= upperBound)
            if( theArray[lowPtr] < theArray[highPtr] )
                workSpace[j++] = theArray[lowPtr++];
            else
                workSpace[j++] = theArray[highPtr++];

        while(lowPtr <= mid)
            workSpace[j++] = theArray[lowPtr++];

        while(highPtr <= upperBound)
            workSpace[j++] = theArray[highPtr++];

        for(j=0; j<n; j++)
            theArray[lowerBound+j] = workSpace[j];
    }  // end merge()
    One interesting thing about the merge method is that almost all implementations don't pass the midpoint to it; it is recalculated inside merge. This seems strange, since highPtr is passed as mid + 1 and inside merge we have mid = highPtr - 1, which means merge's mid is exactly the midpoint already computed in recMergeSort. Why didn't the author pass mid to merge, as in merge(workSpace, lowerBound, mid, mid+1, upperBound)?
    I think there must be a reason; otherwise I can't understand why implementations of an algorithm more than half a century old would all coincide on such a little detail.
    Read the article

  • How good is Dotfuscator Community Edition? What is "good enough obfuscator"?

    - by zendar
    I plan to release one small, low-priced utility. Since this is more hobby than business, I planned to use the Dotfuscator Community Edition that ships with VS2008. How good is it? I could also use a definition of a "good enough obfuscator" - what features are missing from Dotfuscator Community Edition to make it good enough? Edit: I checked the pricing of a number of commercial obfuscators and they cost a lot. Is it worth it? Are the commercial versions that much better at protecting against reverse engineering? I'm not very afraid of my application being cracked (it would be disappointing if the application were so bad that no one is interested in cracking it). It's not heavily protected anyway: just a not-overly-complex serial key and licence checks in a few places in the code. It just bugs me that, without obfuscation, somebody could easily get the source code, rebrand it and sell it as their own. Does this happen a lot? Edit 2: Can somebody recommend a commercial obfuscator? I found lots of them; all of them are expensive, and some don't even have a price listed on their website. Feature-wise, all the products seem more or less similar. What is the minimal set of features an obfuscator should have?

    Read the article

  • how many types of code signing certificates do I need?

    - by gerryLowry
    In Canada, website SSL certificates can be had for as low as US$10. Unfortunately, code signing certificates cost about 10 times as much. One website mentions Vista compatibility ... this seems strange, because my assumption is they must support XP, Vista, Windows 7, Server 2003, and Server 2008 or they would be useless. https://secure.ksoftware.net/code_signing.html US$99. Supported platforms: Microsoft Authenticode - sign any Microsoft executable format (32- and 64-bit EXE, DLL, OCX, or any ActiveX control); signing hardware drivers is not currently supported. Adobe AIR - sign any Adobe AIR application. Java - sign any JAR applet. Microsoft Office - sign any MS Office macro or VBA (Visual Basic for Applications) file. Mozilla - sign any Mozilla object file. The implication is that a single code signing certificate can do ALL of the above. ksoftware actually discounts Comodo certificates, and the Comodo website is unclear. QUESTION: Will ONE code signing certificate be enough, or do I need one for Microsoft executables and a second for things like Word and Excel macros? My main goal is to sign things like VS2008 code snippets so that I can export them securely; however, I would like to be able to use the same code signing certificate for signing other items too. Thank you ~~ regards, Gerry (Lowry)

    Read the article

  • Advice for last year college graduates

    - by Tomh
    Hey guys, I know there are many "advice" questions around this site, but I wanted to narrow mine down to last-year college students -- in my case, my last year as a Master's student in computer science. Here is a list of things I've done during my time in college (which I can recommend others do as well):
    - Code a lot: I've written several hobby projects, had part-time jobs, entered Microsoft's Imagine Cup, took intensive programming courses and did freelance gigs.
    - Read a lot: I've bought most of the top books from the recommended book topics here; to be honest, I have not read them all.
    - Learn different languages: I've tried several languages, including Haskell, Java, Python, Ruby, Lisp, Prolog, C#, PHP, JS, AS3 and possibly some more I forgot.
    - Try to start a blog: Joel recommends learning how to write, and I tried starting a couple of blogs to improve on this, but gave up on all of them after writing about three posts. It was just not my thing...
    - Have a portfolio of launched projects/programs: I'm busy with this; I have a couple of finished, working projects I worked on to show to people.
    So this is my last year. Is there anything else you can recommend a last-year college student do before hitting the job market? Personally, I'm tempted to spend my time on the following:
    - Practice algorithm design.
    - Learn and memorize the usage of the low-level APIs of your favorite language.
    - Polish your portfolio.
    Why? Because the first two will make sure you pass the majority of the interviews, at least here in Holland (I could be wrong). I'd rather not spend my time on those first two points, but I have to be realistic, and that's just my experience of the kind of questions you'll get when you apply. The third point is my hope that I won't have to answer questions about, for example, the number of standard types in C# if they can see I get projects done and launched. But I'm still graduating, so I don't know anything :), and many of you might be hiring grads on a regular basis and could tell me and other interested people what you wish the recent grads you interviewed had done before they applied.

    Read the article

  • How to handle ViewModel and Database in C#/WPF/MVVM App

    - by Mike B
    I have a task management program with an "Urgency" field. Valid values are Int16, currently mapped to 1 (High), 2 (Medium), 3 (Low), 4 (None) and 99 (Closed). The urgency field is used to rank tasks as well as to alter the look of the items in the list and detail views. When a user is editing or adding a task they select or view the urgency in a ComboBox, and a Converter supplies Strings in place of the Ints. The urgency collection is so simple that I did not make it a table in the database; instead it is an ObservableCollection<Int16> that is populated by a method. Since the same screen may be used to view a closed task, the "Closed" urgency must be in the ItemsSource, but I do not want the user to be able to select it. In order to prevent the user from selecting that item in the ComboBox, while still showing it if the item in the database has that value, should I:
    1. Manually disable the item in the ComboBox in code or XAML (I doubt it)
    2. Change the Urgency collection from an Int16 to an object with a Selectable property that the IsEnabled property of the ComboBoxItem binds to
    3. Do as in 2, but also separate the urgency information into its own table in the database with a foreign key in the Tasks table
    4. None of the above (I suspect this is the correct answer)
    I ask this because this is a learning project (my first real WPF project and my first ever MVVM project). I know there is rarely one right way to do something, but I want to make sure I am learning in a reasonable manner, since it is far harder to unlearn bad habits. Thanks, Mike
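    For what it is worth, here is a minimal sketch of option 2 (all class, property and label names are made up for illustration): the collection holds small objects instead of raw Int16 values, and the one non-selectable level is the "Closed" entry, so it can still be displayed for closed tasks without being offered for selection.

    using System.Collections.ObjectModel;

    // Sketch of option 2 with illustrative names: each urgency carries an IsSelectable
    // flag; "Closed" stays visible (so closed tasks still display) but cannot be picked.
    public class UrgencyLevel
    {
        public short Value { get; set; }        // 1..4 and 99, matching the existing mapping
        public string Label { get; set; }       // display text the converter currently produces
        public bool IsSelectable { get; set; }  // false only for "Closed"
    }

    public class TaskEditViewModel
    {
        public ObservableCollection<UrgencyLevel> UrgencyLevels { get; private set; }

        public TaskEditViewModel()
        {
            UrgencyLevels = new ObservableCollection<UrgencyLevel>
            {
                new UrgencyLevel { Value = 1,  Label = "High",   IsSelectable = true  },
                new UrgencyLevel { Value = 2,  Label = "Medium", IsSelectable = true  },
                new UrgencyLevel { Value = 3,  Label = "Low",    IsSelectable = true  },
                new UrgencyLevel { Value = 4,  Label = "None",   IsSelectable = true  },
                new UrgencyLevel { Value = 99, Label = "Closed", IsSelectable = false }
            };
        }
    }

    The ComboBox would then bind ItemsSource to UrgencyLevels, DisplayMemberPath to Label and SelectedValuePath to Value, and an ItemContainerStyle setter can bind each ComboBoxItem's IsEnabled to IsSelectable; the disabled look stays in the view while the data stays in the view model.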

    Read the article

  • ssh tunneling with visualsvn

    - by DeveloperChris
    I have been asked to set up VisualSVN for Visual Studio 2008. Due to firewall restrictions and server configuration, I need to use SSH tunneling. My problem is this: the local machine needs to connect to a gateway machine via SSH and then on to the Subversion server, i.e. local machine ---{ssh}--- gateway ---{ssh}--- subversion server. I am not exactly sure of the correct process for this. It appears that I must start an ssh process using plink to open a local port and forward it to the remote Subversion server, e.g.: plink user@gateway -L 22:192.168.1.1:22 Then, when VisualSVN starts, it uses TortoisePlink to make the actual connection through to the Subversion server using svn+ssh://username@localhost:22/myrepo. This seems very, very clunky: firstly, it needs several steps to set up the connection; secondly, I need plink running, which leaves a command prompt on the desktop (clutter = yuck); lastly, I need to use two different programs that do the same thing (plink + TortoisePlink). The problem is that TortoisePlink doesn't run in the background: as soon as I connect to the SSH gateway and enter the password it closes again, so I can't use it to create the initial connection. If I use plink instead of TortoisePlink in VisualSVN then I never get prompted for the password, so it just hangs with an open command prompt and no password request. Is there a way to set up VisualSVN so that everything happens in one command line? I have searched high and low for a suitable, clean method to tunnel from VisualSVN to the remote server and have found very little; it all either assumes one hop (not two like mine) or glosses over all the hard bits. DC

    Read the article
