Search Results



  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, the computer or operating system normally represents it in powers of 1024: a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or gigabytes, it has to be converted into decimal first, and in places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes).

    Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive nowadays, there's always a disclaimer that says "One gigabyte means one billion bytes". Using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive makers either pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter).

    Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
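
    To make the size gap concrete, here is a minimal sketch (Python, chosen purely for illustration) of how a decimal "1 TB" drive ends up displayed as "931 GB" when the operating system divides by powers of 1024:

        # A decimal "1 TB" drive displayed with binary prefixes, as most
        # operating systems do: 10**12 / 2**30 is roughly 931.
        DECIMAL_TB = 10**12          # what the vendor sells as "1 TB"

        def to_binary_units(n_bytes):
            # Walk up the binary prefixes until the value drops below 1024.
            for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
                if n_bytes < 1024:
                    return "%.0f %s" % (n_bytes, unit)
                n_bytes /= 1024.0
            return "%.2f PiB" % n_bytes

        print(to_binary_units(DECIMAL_TB))   # -> "931 GiB", labelled "931 GB"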

    Read the article

  • Check parameters annotated with @Nonnull for null?

    - by David Harkness
    We've begun using FindBugs and annotating our parameters with @Nonnull appropriately, and it works great to point out bugs early in the cycle. So far we have continued checking these arguments for null using Guava's checkNotNull, but I would prefer to check for null only at the edges--places where the value can come in without having been checked for null, e.g. a SOAP request.

        // service layer accessible from outside
        public Person createPerson(@CheckForNull String name) {
            return new Person(Preconditions.checkNotNull(name));
        }
        ...
        // internal constructor accessed only by the service layer
        public Person(@Nonnull String name) {
            this.name = Preconditions.checkNotNull(name); // remove this check?
        }

    I understand that @Nonnull does not block null values itself. However, given that FindBugs will point out anywhere a value is transferred from an unmarked field to one marked @Nonnull, can't we depend on it to catch these cases (which it does) without having to check these values for null everywhere they get passed around in the system? Am I naive to want to trust the tool and avoid these verbose checks? Bottom line: While it seems safe to remove the second null check above, is it bad practice? This question is perhaps too similar to "Should one check for null if he does not expect null", but I'm asking specifically in relation to the @Nonnull annotation.

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.:

        20100523 20100614 20100721 20100722 20100809 20100901 20101001
        20101003 20101104 20101119 20101203 20101218 20110102

    I'm looking for a script to import each of these snapshots as a new Subversion revision to the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are as numbered. Some other requirements:

    - The HEAD revision must not be cumulative of the previous snapshots, i.e., files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the HEAD revision.
    - Meanwhile, there should be continuity between files that do persist between snapshots: Subversion should know that there are previous versions of these files and not treat them as brand new files within each revision.

    A sketch of what such a script could look like appears below. Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies, and I plan to release it as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), but I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I would also like a solution in Git, and I have posted that separately so that folks with expertise in Git and Subversion can give focused answers on one or the other. The same question but for Git: "Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?" There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script: what commands to use, and code to check valid scenarios if necessary - i.e. a working script, basically. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
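
    One workable shape for the script, sketched in Python under stated assumptions (the repository URL and folder layout are placeholders, and it assumes Subversion 1.7+ with its single .svn directory at the working-copy root). The essential moves are svn add --force for files new to a snapshot and an svn delete pass for files that disappeared, which keeps per-file history continuous while ensuring HEAD is not cumulative:

        # Sketch only: import dated snapshot folders as successive SVN revisions.
        import os, shutil, subprocess

        REPO_URL = "file:///path/to/repo/trunk"     # placeholder
        SNAPSHOT_DIR = "snapshots"                  # placeholder

        def run(*cmd, cwd=None):
            subprocess.check_call(cmd, cwd=cwd)

        run("svn", "checkout", REPO_URL, "wc")
        for snap in sorted(os.listdir(SNAPSHOT_DIR)):   # YYYYMMDD names sort by date
            # Mirror the snapshot into the working copy, keeping .svn intact.
            for entry in os.listdir("wc"):
                if entry == ".svn":
                    continue
                path = os.path.join("wc", entry)
                shutil.rmtree(path) if os.path.isdir(path) else os.remove(path)
            src = os.path.join(SNAPSHOT_DIR, snap)
            for entry in os.listdir(src):
                s, d = os.path.join(src, entry), os.path.join("wc", entry)
                shutil.copytree(s, d) if os.path.isdir(s) else shutil.copy2(s, d)
            run("svn", "add", "--force", ".", cwd="wc")   # schedule new files
            status = subprocess.check_output(["svn", "status"], cwd="wc")
            for line in status.decode().splitlines():
                if line.startswith("!"):                  # gone since last snapshot
                    run("svn", "delete", "--force", line[1:].strip(), cwd="wc")
            run("svn", "commit", "-m", "Snapshot " + snap, cwd="wc")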

    Read the article

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the middle ground is. I'm going to admit that in my last 12 months, I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice.

    Am I a sell-out? No, not completely. Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write, as they are needed. Testing was dropped for me, though; the company couldn't afford to have such an unknown handed to the team, who quite frankly, even if I did create test frameworks, would never use or maintain them correctly.

    Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion on how concerned one is with one's own needs, business needs, the sake of craftsmanship, etc. But consider, for example, that testing could be dropped if the company decided they would rather have a test team than understand unit testing in programming - would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal gain in the business that makes up for what you drop - hopefully, unless of course you are pretty much out for lining your own pockets and not community/social collaborating ;). Double your money and go back to RAD? Or walk on, look for someone doing Agile, and never look back...

    Read the article

  • Is this a link scheme? If so, what to do? what problems can i face?

    - by guisasso
    I was asked to remodel a website, and decided to check its rank on Alexa. Surprisingly, there are many, many different websites linking to it, none relevant. One particular thing about it is that none of these URLs work, and they all display the exact same error when accessed, which to me is a very good indication that this is some sort of linking scheme. (Besides the somewhat obvious names, it even says "scheme" in one of the URLs!?) If so, how should I proceed with this website? What can I do if this is in fact a scheme, how can it hurt the website, what types of problems can I face, and what can I do about it?

        addurlnow . info
        dirlist15.addurlnow . info/Business___Economy/Services/page-12.html
        linkdirectory101 . info
        dirlist16.linkdirectory101 . info/Business___Economy/Services/page-15.html
        seonetblog . info
        dirlist52.seonetblog . info/Business___Economy/Affiliate_Schemes
        addurls . us
        dirlist21.addurls . us/Business___Economy/Services/page-10.html
        webdirectoriessite . info
        dirlist20.webdirectoriessite . info/Business___Economy/Services/page-6.html
        addurlstore . info
        dirlist10.addurlstore . info/business___economy/services/page-14.html
        ukwebdirectorys . info
        dirlist21.ukwebdirectorys . info/Business___Economy/Services/page-13.html

    Read the article

  • How to organize music files?

    - by newbie
    I'm trying to re-organize my music files. I copied them from my iPod before it died, so they have funky names, like DGEDH.mp3. I tried using a Windows utility to rename them from their ID3 tags, but it didn't work very well--it created a bunch of folders (one for each album in theory, but more like 7 or 8 in reality) and renamed the files with non-English characters. Most of the files are MP3s, but there's at least one or two other file types as well. I'd like to copy them to a new folder and try a Ubuntu utility to rename and re-organize them. In Windows, this would be straightforward--I'd use the Search function (with no file name specified) to list all of the files in the main folder and its subfolders, then drag and drop them to the new folder. What's the easiest way to accomplish the same thing in Ubuntu? The GUI search doesn't seem to accept wildcard characters, and I don't remember all of the file types I have, so it's not as simple as searching for "mp3". Many thanks!
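
    For the gathering step, a minimal sketch (Python, for illustration; the SRC and DEST paths are assumptions) that copies every file in the tree, whatever its extension, into one flat folder. The shell equivalent of the listing step would be find SRC -type f:

        # Flatten a music tree into a single folder, regardless of file type.
        import os, shutil

        SRC = os.path.expanduser("~/Music/ipod")    # assumption: adjust paths
        DEST = os.path.expanduser("~/Music/flat")
        os.makedirs(DEST, exist_ok=True)

        for root, _dirs, files in os.walk(SRC):
            for name in files:
                target = os.path.join(DEST, name)
                # Avoid silently overwriting files that share a name.
                if os.path.exists(target):
                    base, ext = os.path.splitext(name)
                    target = os.path.join(DEST, base + "_dup" + ext)
                shutil.copy2(os.path.join(root, name), target)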

    Read the article

  • Oracle Database Appliance and Remarketers - A Whole New Opportunity

    - by Martin Morganti
    We have recently had another exciting announcement for the remarketer initiative. The Oracle Database Appliance is now available for resale through your authorized Value Added Distributor. This means that with no fees, no barriers, and no upfront commitments, a remarketer can identify and close transactions for the Oracle Database Appliance without having to sign an Oracle contract. In addition, the remarketer is now authorized to sell the Oracle Database Appliance with Oracle Database 11g Enterprise Edition, RAC, or RAC One Node software included in the transaction.

    The availability of the Oracle Database Appliance for remarketers means that a broad base of customers can now start engaging with Oracle through remarketers by taking advantage of this engineered-system technology: Oracle hardware and software engineered to work together to drive incremental revenue. The Oracle Database Appliance is a simple, reliable, and affordable way to rapidly deploy a database cluster: plug in the power and the network, and push the one-button install for the Appliance Manager software.

    Find out more about the Oracle Database Appliance, including a webinar by Oracle President Mark Hurd, as well as other material to help you better understand the opportunity available to you as a remarketer. If you want Oracle to keep you informed about developments in Remarketer, complete the free registration, or talk to your local Oracle Remarketer authorized value added distributor (complete list available on the Oracle Remarketer page).

    Read the article

  • "From Russia with Love" - My Oracle Russian Experience

    - by cwarticki
    Two weeks ago, I traveled to Moscow, Russia. I had the pleasure of meeting with many of our Oracle Partners and Customers in the region. I also worked with our Oracle Russia team throughout the week, building many new friendships. The showcase for the week was an Oracle Support Strategy event for our Oracle Partners and Customers, held at the Katerina-City Hotel, Moscow. The Oracle Marketing team did an amazing job, registering 100+ for the event, and nearly 100 were in attendance.

    During the event, I spoke about many different topics. Part was a hands-on workshop on personalizing your MOS Dashboard and configuring Hot Topics email alerts; customers learned how to subscribe to newsletters and other Oracle information, and it covered a multitude of Support Best Practices. Additionally, I presented Platinum Services to the audience, and my colleague Kristophe Hermans, from Oracle Belgium, spoke on Proactive Support. I also had the distinct privilege to meet one-on-one with our customers representing OJSC VimpelCom, MTC-Rus, and Sberbank. Pictured with me is Valery Yourinsky, Director of Technology Consulting Dept., FORS Distribution (Oracle Platinum Partner).

    Finally, I spent 2.5 days with my Oracle colleagues from Oracle Russia. They are super, hard-working, dedicated customer-service professionals - all of them! I owe them all a debt of gratitude. Next time, we meet in Florida - ok? I am very appreciative of all our Oracle partners, customers, and colleagues. Thanks for hosting me and showing me a wonderful time in your country. I look forward to my return.

    Sincerely,
    Chris Warticki
    Global Customer Management

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life, and math, and to use them to design my projects. They can help in creating a better design, or a better understanding of the problem, and they are cool.

    Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and one as a master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and truly real-time. I like my design and it seems to work well.

    So here are the important concepts that I'm trying to capture in the metaphor:

    - hierarchy of processing
    - not using one big brain but rather several small, distributed brain units
    - using distributed power or resources

    I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • Paranoid Encryption

    - by Lord Jaguar
    Call me paranoid, but I really like to keep my stuff secret, yet readily available on the cloud. So, asking this question: how safe and reliable is encryption software (e.g., TrueCrypt)? The reason I ask is: what if I encrypt my data today with this software and after a couple of years, the software is gone? What happens to my encrypted data?

    Is it equally safe to AES-encrypt using 7-Zip? Will it provide the same or an equivalent level of encryption as TrueCrypt or other encryption software? (I agree TrueCrypt will be better because of the container encryption it gives.) And what happens if 7-Zip shuts down after 5 years?

    I am sorry if I am sounding paranoid, but I am coming back to my original question: is there any application/software-independent encryption? Meaning, can I encrypt with one piece of software and decrypt with another, so that I will not be dependent on just one vendor? I want my encryption to depend ONLY on the password and NOT on the encryption program/software. The next question: can I write my own program that does AES or stronger encryption when I give it a passphrase, so that I don't need to depend on third-party software for encryption? If yes, which languages support this? Can someone give me a heads-up as to where to look in case I want to write my own encryption program?
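
    On that last question: every mainstream language ships or hosts a vetted AES implementation, so "writing your own" safely means composing standard primitives, not inventing ciphers. A minimal sketch, assuming Python with the third-party cryptography package; because the on-disk format here is just a salt plus a standard PBKDF2-derived key and an AES token, any program implementing the same recipe can decrypt it, which addresses the vendor-independence worry:

        # Password-based file encryption depending only on the passphrase.
        # Assumption: pip install cryptography (Fernet = AES + HMAC).
        import os, base64
        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

        def _key(passphrase, salt):
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                             salt=salt, iterations=480000)
            return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

        def encrypt_file(path, passphrase):
            salt = os.urandom(16)
            token = Fernet(_key(passphrase, salt)).encrypt(open(path, "rb").read())
            with open(path + ".enc", "wb") as out:
                out.write(salt + token)     # store the salt with the ciphertext

        def decrypt_file(path, passphrase):
            blob = open(path, "rb").read()
            return Fernet(_key(passphrase, blob[:16])).decrypt(blob[16:])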

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service-Oriented Architecture design approach into their existing enterprise systems, it creates the need for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified 10 common methods for identifying and defining services.

    SOA's Ten Common Methods for Service Identification and Definition:

    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements

    Each of these methods has its own pros and cons within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies in order to ensure that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.

    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, 12 10). Ten Ways to Identify Services. SOA Magazine: http://www.soamag.com/I13/1207-1.php
    Progress.com. (2011, 10 30). ESB Architecture and Lifecycle Definition. Progress.com: http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • ArchBeat Top 20 for March 25-31, 2012

    - by Bob Rhubart
    The top 20 most-clicked links as shared via my social networks for the week of March 25-31, 2012.

    - Oracle Cloud Conference: dates and locations worldwide
    - The One Skill All Leaders Should Work On | Scott Edinger
    - BPM in Retail Industry | Sanjeev Sharma
    - Oracle VM: What if you have just 1 HDD system | @yvelikanov
    - Solution for installing the ADF 11.1.1.6.0 Runtimes onto a standalone WLS 10.3.6 | @chriscmuir
    - Beware the 'Facebook Effect' when service-orienting information technology | @JoeMcKendrick
    - Using Oracle VM with Amazon EC2 | @pythianfielding
    - Oracle BPM: Adding an attachment during the Human Task Initialization | Manh-Kiet Yap
    - When Your Influence Is Ineffective | Chris Musselwhite and Tammie Plouffe
    - Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN
    - A surefire recipe for cloud failure | @DavidLinthicum
    - IT workers bore brunt of offshoring over past decade: analysis | @JoeMcKendrick
    - Private cloud-public cloud schism is a meaningless distraction | @DavidLinthicum
    - Oracle Systems and Solutions at OpenWorld Tokyo 2012
    - Dissing Architects, or "What's wrong with the coffee?" | Bob Rhubart
    - Validating an Oracle IDM Environment (including a Fusion Apps build out) | @FusionSecExpert
    - Cookbook: SES and UCM setup | George Maggessy
    - Red Samurai Tool Announcement - MDS Cleaner V2.0 | @AndrejusB
    - OSB/OSR/OER in One Domain - QName violates loader constraints | John Graves
    - Spring to Java EE Migration, Part 3 | @ensode

    Thought for the Day: "Inspire action amongst your comrades by being a model to avoid." — Leon Bambrick

    Read the article

  • Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture

    - by Bob Rhubart
    If you pay any attention at all to the Oracle blogosphere you've probably seen one of several blogs published by members of a group known as the Oracle Fusion Middleware A-Team. A new blog, The Oracle A-Team Chronicles, was recently launched that combines all of those separate A-Team blogs in one. In this program you'll meet some of the people behind the A-Team and the creation of that new blog.

    The Conversation
    - Listen to Part 1: Background on the A-Team - When was it formed? What is its mission? What are some of the most common challenges A-Team architects encounter in the field?
    - Listen to Part 2 (July 3): The panel discusses the trends - big data, mobile, social, etc. - that are having the biggest impact in the field.
    - Listen to Part 3 (July 10): The panelists discuss the analysts, journalists, and other resources they rely on to stay ahead of the curve as the technology evolves, and reveal the last article or blog post they shared with other A-Team members.

    The Panelists
    - Jennifer Briscoe: Senior Director, Oracle Fusion Middleware A-Team
    - Clifford Musante: Lead Architect, Application Integration Architecture A-Team, webmaster of the A-Team Chronicles
    - Mikael Ottosson: Vice President, Oracle Fusion Apps and Fusion Middleware A-Team and Cloud Applications Group
    - Pardha Reddy: Senior Director of Oracle Identity Management and a member of the Oracle Fusion Middleware A-Team

    Coming Soon
    - Data Warehousing and Oracle Data Integrator: Guest producer and Oracle ACE Director Gurcan Orhan selected the topic and panelists for this program, which also features Uli Bethke, Michael Rainey, and Oracle ACE Cameron Lackpour.
    - Java and Oracle ADF Mobile: An impromptu roundtable discussion featuring QCon New York 2013 session speakers Doug Clarke, Frederic Desbiens, Stephen Chin, and Reza Rahman.

    Stay tuned.

    Read the article

  • Displaying possible movement tiles

    - by Ash Blue
    What's the fastest way to highlight all possible movement tiles for a player on a square grid? Players can only move up, down, left, right. Tiles can cost more than one movement, multiple levels are available to move, and players can be larger than one tile. Think of games like Fire Emblem, Front Mission, and XCOM. My first thought was to recursively search for connecting tiles. This quickly demonstrated many shortcomings when blockers, movement costs, and other features were added into the mix. My second thought was to use an A* pathfinding algorithm to check all tiles presumed valid. Presumed valid tiles would come from an algorithm that generates a diamond of tiles from the player's speed (see example here http://jsfiddle.net/truefreestyle/Suww8/9/). Problem is this seems a little slow and expensive. Is there a faster way? Edit: In Lua for Corona SDK, I integrated the following movement generation controller. I've linked to a Gist here because the solution is around 90 lines of code. https://gist.github.com/ashblue/5546009
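
    For what it's worth, the usual answer is a Dijkstra-style flood fill bounded by the movement budget, rather than one full A* search per candidate tile. A minimal sketch (Python, for illustration; per-tile entry costs and the grid layout are assumptions, with float("inf") for blocked tiles):

        # Return every tile a unit can reach with the given movement budget.
        import heapq

        def reachable_tiles(grid, start, movement):
            rows, cols = len(grid), len(grid[0])
            best = {start: 0}                    # cheapest known cost per tile
            frontier = [(0, start)]              # the start tile costs nothing
            while frontier:
                cost, (r, c) = heapq.heappop(frontier)
                if cost > best.get((r, c), float("inf")):
                    continue                     # stale queue entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        ncost = cost + grid[nr][nc]   # pay the entry cost
                        if ncost <= movement and ncost < best.get((nr, nc), float("inf")):
                            best[(nr, nc)] = ncost
                            heapq.heappush(frontier, (ncost, (nr, nc)))
            return set(best)

    Because every tile is settled at most once, the work is proportional to the number of reachable tiles (times the heap cost), which stays cheap even with blockers and variable movement costs.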

    Read the article

  • Synchronous vs. asynchronous for publish subscribe communication between JavaScript objects

    - by natlee75
    I implemented the publish-subscribe pattern in a JavaScript module to be used by entirely client-side oriented JavaScript objects. This module has nothing to do with client-server communications in any way, shape, or form. My question is whether it's better for the publish method in such a module to be synchronous or asynchronous, and why.

    As a very simplified example, let's say I'm building a custom UI for an HTML5 video player widget: One of my modules is the "video" module that contains the VIDEO element and handles the various features and events associated with that element. This would probably have a namespace something like "widgets.player.video". Another is the "controls" module that has the various buttons - play, pause, volume, scrub, fullscreen, etc. This might have a namespace along the lines of "widgets.player.controls". These two modules are children of a parent "player" module ("widgets.player"?), and as such would have no inherent knowledge of each other when instantiated as children of the "player" object. The "controls" elements would obviously need to be able to effect some changes on the video (click "Play" and the video should play), and vice versa (the video's "timeupdate" event fires and the visual display of the current time in the controls should update). I could tightly couple these modules and pass references to each other, but I'd rather take a more loosely coupled approach by setting up a pubsub-type module that both can subscribe to and publish from.

    SO (thanks for bearing with me): in this kind of scenario, is there an advantage one way or another for synchronous publication versus asynchronous publication? I've seen some solutions posted online that allow for either/or with a boolean flag, whereas others automatically do it asynchronously. I haven't personally seen an implementation that just automatically goes with synchronous publication... is this because there's no advantage to it? I know that I can accomplish this with features provided by jQuery, but it seems that there may be too much overhead involved here. The publish-subscribe pattern can be implemented with relatively lightweight code designed specifically for this particular purpose, so I'd rather go with that than a more general-purpose eventing system like jQuery's (which I'll use for more general eventing needs :-).
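
    To make the choice concrete, here is a hedged sketch of such a hub (written in Python purely for illustration; in the browser the deferred branch would be a setTimeout(fn, 0) or similar):

        # A pub/sub hub whose publish() can dispatch synchronously or defer.
        from collections import defaultdict

        class PubSub:
            def __init__(self):
                self._subs = defaultdict(list)
                self._deferred = []              # callbacks queued for later

            def subscribe(self, topic, fn):
                self._subs[topic].append(fn)

            def publish(self, topic, data, sync=True):
                for fn in self._subs[topic]:
                    if sync:
                        fn(data)                 # runs before publish() returns
                    else:
                        self._deferred.append((fn, data))

            def flush(self):                     # e.g. once per event-loop turn
                pending, self._deferred = self._deferred, []
                for fn, data in pending:
                    fn(data)

        bus = PubSub()
        bus.subscribe("video.timeupdate", lambda t: print("controls show", t))
        bus.publish("video.timeupdate", 42)      # synchronous: fires immediately

    The synchronous path keeps ordering predictable and stack traces readable; the asynchronous path protects the publisher from slow or throwing subscribers, which is one reason many libraries default to it.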

    Read the article

  • Can't use nvidia card/driver on optimus notebook

    - by Mr. Pixel
    I installed (once again) the latest official NVIDIA driver for my GT540M on Ubuntu 11.10, and everything seems OK with my xorg.conf file (I've manually added BusID "PCI:1:0:0", since lspci shows 01:00.0 for my GPU). The problem is, when I use the xorg.conf file generated by Xorg -configure, Xorg automatically loads the Intel GPU. So I removed everything that was not related to my NVIDIA card, basically leaving my xorg.conf with one screen and one device (with the nvidia driver and the above-mentioned BusID), and Xorg fails to start. The log says something like "Devices on GT540m [newline] none", and a few lines later, something like "NVIDIA(0) found a screen, but have no device for it". When I don't set the BusID, it doesn't seem to detect my card either. Thank you for any suggestion.

    PS: If possible, I'd like to avoid Bumblebee or any similar "hybrid graphics" solution; last time I tried, I ended up reinstalling Ubuntu.

    Edit: Allow me to clarify the problem. I have a notebook with a GT540M graphics card and an integrated Intel GPU. I want to use the graphics card with full hardware acceleration and its official driver, as I do under Windows.

    Read the article

  • How can I save state from script in a multithreaded engine?

    - by Peter Ren
    We are building a multithreaded game engine and we've encountered some problems, as described below. The engine has 3 threads in total: script, render, and audio. Each frame, we update these 3 threads simultaneously. As these threads update themselves, they produce tasks and put them into a public storage area. When all the threads finish their update, each thread copies the tasks for itself one by one. After all the threads finish their task copying, we make the threads process those tasks and update simultaneously, as described before. So that is the general idea of the task-schedule part of our engine.

    Ok, well, the task-schedule part works well, but here's the problem. Taking the camera as the simplest example:

        local oldPos = camera:getPosition() -- ( 0, 0, 0 )
        camera:setPosition( 1, 1, 1 )       -- Won't take effect yet: the render
                                            -- thread processes the task at the
                                            -- beginning of the next frame
        local newPos = camera:getPosition() -- Still ( 0, 0, 0 )

    So that's the problem: if you intend to change a property of an object in another thread, you have to wait until that thread processes the property-changing message. As a result, what you get from the object is still the information from the last frame. Is there a way to solve this problem? Or did we build the task-schedule part in the wrong way? Thanks for your answers :)
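
    One common way out, sketched below with illustrative names (Python standing in for the script-side binding): give the script thread its own cached copy of the property, applied immediately on write while the real mutation is still queued for the render thread, so same-frame reads return the value just set:

        # Script-side proxy: reads reflect this frame's writes; real work is queued.
        import queue

        class CameraProxy:
            def __init__(self, task_queue):
                self._tasks = task_queue
                self._position = (0.0, 0.0, 0.0)      # script-side cached state

            def get_position(self):
                return self._position                  # no longer a frame behind

            def set_position(self, x, y, z):
                self._position = (x, y, z)             # update the cache now
                self._tasks.put(("camera.set_position", (x, y, z)))  # defer render work

        tasks = queue.Queue()
        cam = CameraProxy(tasks)
        cam.set_position(1, 1, 1)
        assert cam.get_position() == (1, 1, 1)

    The render thread stays the single owner of the real object; the script thread only ever touches its shadow copy, so no locking is added to the hot path.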

    Read the article

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom Work Manager in some circumstances: for example, when you want the server to prioritize one application over another when a response-time goal is required, or when a minimum thread constraint is needed to avoid deadlock.

    The Work Manager Shutdown Trigger is a tool to help with stuck threads which will do the following:

    - Shut down the Work Manager.
    - Move the application to Admin state (not active).
    - Change the server instance health state to failed.

    Example of a Shutdown Trigger set in the config.xml for your domain:

        <work-manager>
          <name>stuckthread_workmanager</name>
          <work-manager-shutdown-trigger>
            <max-stuck-thread-time>30</max-stuck-thread-time>
            <stuck-thread-count>2</stuck-thread-count>
          </work-manager-shutdown-trigger>
        </work-manager>

    Understand that any misconfiguration of the Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production.

    How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create one by following these steps:

        edit()
        startEdit()
        cd('/SelfTuning/mydomain/WorkManagers')
        create('myWM','WorkManager')
        cd('myWM/WorkManagerShutdownTrigger')
        create('myWMst','WorkManagerShutdownTrigger')
        cd('myWMst')
        ls()
        save()      # persist the edit session
        activate()  # apply the change to the running domain

    Read the article

  • Problem with recursive rar archiving non-ascii filenames

    - by AndreasT
    Say I want to create a backup of the folder MainFolder's content using rar. The command rar a Backup.rar -r MainFolder does the job. BUT, if a subdirectory contains more than one file named with non-ASCII characters, then only one of them is archived and the others get excluded. For example, consider the following directory hierarchy (MainFolder, A and B are folders; a, b, ? and ? are files):

        +MainFolder
            +A
                -a
                -b
                -?
                -?
            +B
                -a
                -b
            -a
            -b
            -?
            -?

    Then the command rar a Backup.rar -r MainFolder skips MainFolder/A/? and MainFolder/?, while rar a Backup.rar -r MainFolder/* still skips MainFolder/A/?. Why is it so? Any help is greatly appreciated, thanks!

    For the record, I already encountered some issues with non-ASCII characters (see this question) that other Linux distributions seem not to have. Anyway, I use Lubuntu 12.04, the terminal is lxterminal, and echo $BASH_VERSION returns 4.2.25(1)-release. The rar version is 4.00 beta 3. Another curiosity: right-clicking on the folder and selecting Compress... and then .rar still has the same problem. Other options (zip, tar...) behave correctly.

    Read the article

  • Suggested HTTP REST status code for 'request limit reached'

    - by Andras Zoltan
    I'm putting together a spec for a REST service, part of which will incorporate the ability to throttle users service-wide and on groups of, or on individual, resources. Equally, time-outs for these would be configurable per resource/group/service. I'm just looking through the HTTP 1.1 spec and trying to decide how I will communicate to a client that a request will not be fulfilled because they've reached their limit. Initially I figured that client code 403 - Forbidden was the one, but this, from the spec, bothered me: "Authorization will not help and the request SHOULD NOT be repeated". It actually appears that 503 - Service Unavailable is a better one to use, since it allows for the communication of a retry time through the Retry-After header. It's possible that in the future I might look to support 'purchasing' more requests via eCommerce (in which case it would be nice if client code 402 - Payment Required had been finalized!), but I figure that this could equally be squeezed into a 503 response too. Which do you think I should use? Or is there another I've not considered?
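
    For what it's worth, RFC 6585 later defined 429 Too Many Requests for exactly this case. Where only the older codes are available, the 503-plus-Retry-After approach the post proposes looks like the sketch below (Python with Flask purely for illustration; the route, quota store, and numbers are assumptions):

        # Answer a throttled client with 503 and a Retry-After hint.
        from flask import Flask, make_response

        app = Flask(__name__)
        BUDGET = {"client-1": 0}    # toy per-client request quota

        @app.route("/resource")
        def resource():
            if BUDGET.get("client-1", 0) <= 0:
                resp = make_response("Request limit reached", 503)
                resp.headers["Retry-After"] = "120"   # seconds until quota resets
                return resp
            BUDGET["client-1"] -= 1
            return "OK"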

    Read the article

  • Compiled code spreadsheet-like cell management? (auto-updating)

    - by proGrammar
    Okay, so I am fully aware of how spreadsheets manage cells. They build dependency graphs where, when one cell changes, it tells all the cells that depend on it that it changed, and they update from there. How they update, I think, involves either re-evaluating the formulas stored as strings, or re-evaluating an abstract syntax tree, which I think is stored differently and might be faster.

    What I'm looking to do is manage a few variables in my code the same way, so I don't have to hand-update them in the correct order, which would be a nightmare. But I also want it much faster than spreadsheets. And since I'm not looking for functionality anywhere near as rich as these spreadsheets have, I figure there has to be a very fast implementation of this idea. Especially since I don't have to modify cells after compiling, unless that would be an option.

    I'm very new to programming, so I have no idea. One approach might be a code generator that generates code that does this for me, but I have no clue what the generated code would look like. Specifically, how exactly would variables inform others that they need to update, and what do those variables do to update? I'm looking for any kind of ideas. Programming is not my job, but nonetheless I was hoping to have some kind of system like this that would greatly help me with some stuff. Of course I have been programming plenty lately, so I can still program; I just don't have the full scope of things.

    I'm looking for any kind of ideas, thank you very much in advance! Also, please help me with the tags. I know C# and Java mainly and I'm hoping to implement this in either of those languages, and I'm hoping this can stay in those tags. Forcing this into some kind of spreadsheet tag wouldn't be accurate.
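
    To make "how would variables inform others" concrete, here is a minimal sketch of push-based cells (Python for brevity; the shape ports directly to C# or Java using events or listener interfaces, and all names are illustrative):

        # Spreadsheet-style reactive cells: each cell records who depends on it,
        # and a write pushes recomputation down the dependency graph.
        class Cell:
            def __init__(self, value=None, formula=None, inputs=()):
                self.formula, self.inputs, self.dependents = formula, inputs, []
                for cell in inputs:
                    cell.dependents.append(self)     # register as a dependent
                self.value = (value if formula is None
                              else formula(*[c.value for c in inputs]))

            def set(self, value):
                self.value = value
                self._notify()

            def _notify(self):
                for dep in self.dependents:          # push updates downstream
                    dep.value = dep.formula(*[c.value for c in dep.inputs])
                    dep._notify()

        a, b = Cell(1), Cell(2)
        total = Cell(formula=lambda x, y: x + y, inputs=(a, b))
        a.set(10)
        print(total.value)   # 12 - recomputed automatically

    Note that this naive push can recompute a cell more than once when the graph has diamond shapes; real spreadsheets avoid that by marking cells dirty and recomputing in topological order, which is also what a code generator for your "compiled cells" would emit.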

    Read the article

  • How to experience gradual improvement of knowledge while a newbie does .NET maintenance programming?

    - by amir
    I started my career as a software developer about 6 months ago. This is my first job, and I am the only developer in this company. I gained .NET knowledge by self-study and by doing some university projects. Our systems have old foundations based on an earlier version of .NET, and I'm starting to feel that I am not improving since I am a maintenance programmer here. Everything is old, and my manager is not really taking any chances on gradually improving the software. What is your opinion? What should I do? I am a newbie and work hard to find my way through, but there is no other developer, not even a senior one, to help me here. I need your advice on my situation. And one last thing: can I get a new job having done only maintenance programming? I mean, don't managers say that you don't have the experience of developing new software from scratch? I feel redundant; what do I do?

    Read the article

  • Dealing with institutionalized programmers.

    - by Singleton
    Sometimes programmers who work on a project for a long time tend to get institutionalized. It is difficult to convince them with reasoning, and even if we manage to convince them, they will be reluctant to take the suggestion on board. How do we handle the situation without creating friction in the team?

    Institutionalized in terms of practices: I recently joined a project where the build & release process was made complicated by unnecessary roadblocks. My suggestion was that we could get rid of some of the development overheads (like filling in a few spreadsheets) just by integrating the defect management and version control tools (both are IBM Rational tools, so integration can be an easy, one-off effort). Also, by using tools like Maven and Ant (the project involves Java and some COTS products), build & release can be simplified and manual errors and intervention reduced. I managed to convince the team and am ready to put in the effort to develop a proof of concept. But the 'senior' developer is not willing to take it on board. One reason could be that the current process makes him valuable to the team.

    Read the article

  • Ray Tracing Concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes; an example of such a scene can be seen below. I'm concerned as to which acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand, the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box?

    A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact data structure for the photon they introduce, photon power is stored as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). The other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
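
    On the 4-char question: in Jensen's compact photon the four bytes are not 3 x color + 1 x flux, but Ward's shared-exponent RGBE encoding of the photon's power, so the fourth byte is a common base-2 exponent; that shared exponent is exactly what lets magnitudes around 1/n survive with one byte per channel. A sketch of the packing (Python, purely for illustration):

        # Ward's RGBE shared-exponent packing: 3 mantissa bytes + 1 exponent byte.
        import math

        def rgbe_encode(r, g, b):
            m = max(r, g, b)
            if m < 1e-32:
                return (0, 0, 0, 0)
            mantissa, exponent = math.frexp(m)       # m = mantissa * 2**exponent
            scale = mantissa * 256.0 / m             # maps max channel to [128, 256)
            return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

        def rgbe_decode(r, g, b, e):
            if e == 0:
                return (0.0, 0.0, 0.0)
            f = math.ldexp(1.0, e - 128 - 8)         # 2**(e-128) / 256
            return (r * f, g * f, b * f)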

    Read the article

  • Benefits of Masters of Engineering Professional Practice for the lowly (yet aspiring) programmer

    - by Peter Turner
    I've been looking into in-state online degree programs 'to fit my busy lifestyle' (i.e. three children, a wife, and an hour-and-a-half commute). One interesting one I've found is the Master of Engineering in Professional Practice. It looks more useful and practical than an MBA in project management. I'll contact the admissions department there about the specifics, but here I'm just asking in general: do the courses in this degree apply to software engineering/development in even an abstract sense? The university I'm looking at does not have a Software Engineering major in the school of engineering. I'm not interested in architecture astronomy, but I am interested in helping my company succeed and being able to communicate technical information at a high and effective level, as well as being able to lead my co-programmers toward a more robust end product.

    So my multipart question is: What might be the real benefit to me and my brain? And how do I convince my boss (the owner of the company, who does do some tuition reimbursement) that just because it doesn't say anything about software, it might still do us some good? Oh, and how do I get past the fact that a master's degree would make me more qualified to be the project manager than... the project manager? (who is my supervisor)

    Read the article
