Search Results

Search found 502 results on 21 pages for 'outdated'.


  • Optimizing landing pages

    - by Oleg Shaldybin
    In my current project (Rails 2.3) we have a collection of 1.2 million keywords, and each of them is associated with a landing page, which is effectively a search results page for a given keyword. Each of those pages is pretty complicated, so it can take a long time to generate (up to 2 seconds under moderate load, even longer during traffic spikes, with current hardware). The problem is that 99.9% of visits to those pages are new visits (via search engines), so caching on the first visit doesn't help much: that visit is still slow, and the next one could be weeks away. I'd really like to make those pages faster, but I don't have many ideas on how to do it. A couple of things come to mind: (1) build a cache for all keywords beforehand (with a very long TTL, a month or so), although building and maintaining this cache can be a real pain, and the search results on the page might be outdated or even no longer accessible; or (2) given the volatile nature of this data, don't try to cache anything at all and just try to scale out to keep up with traffic. I'd really appreciate any feedback on this problem.
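    For illustration only, a minimal sketch of a middle ground between the two options: keep a very long TTL, serve stale copies, and refresh them in the background, so at most the first-ever visit per keyword pays the full rendering cost. This is not Rails-specific, and the names (landingPage, renderLandingPage, CACHE_TTL_MS) are hypothetical:

        // Serve a cached page even if it is stale, and rebuild it asynchronously.
        type CacheEntry = { html: string; builtAt: number };

        const CACHE_TTL_MS = 30 * 24 * 60 * 60 * 1000; // roughly one month
        const cache = new Map<string, CacheEntry>();

        async function landingPage(
          keyword: string,
          renderLandingPage: (k: string) => Promise<string>
        ): Promise<string> {
          const hit = cache.get(keyword);
          if (hit) {
            if (Date.now() - hit.builtAt > CACHE_TTL_MS) {
              // Stale entry: refresh in the background, still answer with the old copy now.
              renderLandingPage(keyword).then(html =>
                cache.set(keyword, { html, builtAt: Date.now() })
              );
            }
            return hit.html;
          }
          const html = await renderLandingPage(keyword); // slow path, first visit only
          cache.set(keyword, { html, builtAt: Date.now() });
          return html;
        }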

    Read the article

  • Plugin-like architecture in .NET

    - by devoured elysium
    I'm trying to implement a plug-in-like application. I know there are already several solutions out there, but this is just going to be a proof of concept, nothing more. The idea would be to make the main application almost featureless by default and then let the plugins know about each other, having them implement all the needed features. A couple of issues arise: I want the plugins at runtime to know about each other through my application. That doesn't mean they couldn't reference other plugins' assemblies at code time so they could use their interfaces, only that plugin-feature initialization should always go through my main app. For example: if I have both plugins X and Y loaded and Y wants to use X's features, it should "register" its interest through my application. I'd have to have a kind of "dictionary" in my application where I store all the loaded plugins. After registering its interest in my application, plugin Y would get a reference to X so it could use it. Is this a good approach? When coding plugin Y that uses X, I'd need to reference X's assembly so I can program against its interface. That has the issue of versioning. What if I code my plugin Y against an outdated version of plugin X? Should I always use a "central" place that holds the up-to-date versions of all the assemblies? Are there by chance any books out there that specifically deal with these kinds of designs for .NET? Thanks
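    To make the "dictionary of loaded plugins" idea concrete, here is a rough sketch of a host-mediated registry. The question is about .NET, so treat this as pseudocode (written in TypeScript syntax for brevity); a C# version would typically use an interface plus a dictionary keyed by plugin name, and all names here are made up:

        interface Plugin {
          name: string;
          init(host: PluginHost): void;
        }

        class PluginHost {
          private plugins = new Map<string, Plugin>();

          load(plugin: Plugin): void {
            this.plugins.set(plugin.name, plugin);
            plugin.init(this); // every plugin only ever sees the host at first
          }

          // Plugin Y "registers its interest" in plugin X by name and gets a
          // reference back only if the host has actually loaded X.
          request(name: string): Plugin | undefined {
            return this.plugins.get(name);
          }
        }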

    Read the article

  • jQuery - Cancel form submit (using "return false"?)?

    - by Nike
    Hey there. I'm running into a bit of trouble while trying to cancel the submit of a form. I've been following this tutorial (even though I'm not making a login script), and it seems to be working for him. Here's my form:

        <form action="index.php" method="post" name="pForm">
          <textarea name="comment" onclick="if(this.value == 'skriv här...') this.value='';" onblur="if(this.value.length == 0) this.value='skriv här...';">skriv här...</textarea>
          <input class="submit" type="submit" value="Publicera!" name="submit" />
        </form>

    And here's the jQuery:

        $(document).ready(function() {
          $('form[name=pForm]').submit(function(){
            return false;
          });
        });

    I've already imported jQuery in the header and I know it's working. My first thought was that the tutorial might be outdated, but it's actually "just" a year old. So does anyone see what's wrong? Thanks in advance. EDIT: From what I've read, the easiest and most appropriate way to abort the submit is to return false? But I can't seem to get it working. I've searched the forum and found several helpful threads, but none of them actually works. I must be screwing something up.
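    For reference, a common variant of the same cancellation (assuming a standard jQuery build) receives the event object and calls preventDefault() instead of relying on return false; this is only a sketch of that alternative:

        $(document).ready(function() {
          $('form[name=pForm]').submit(function(event) {
            event.preventDefault(); // stop the browser from submitting the form
          });
        });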

    Read the article

  • Travelling software. Is that a concept?

    - by Bubba88
    Hi! This is barely a sensible question. I would like to ask whether there exists a program intended to travel (for example following some physical forces) across the planet, possibly occupying and freeing computational resources/nodes. Literally that means that some agent-based system is just regularly changing its location and (inevitably, to some extent) configuration. An example would be: suppose you have external sensors, and free computers - nodes - across the space; would it make sense to self-replicate agents to follow the initializers from sensors, but in such a restrictive manner that the computation is only localized where the physical business is going on? I want to stress that this question is just for 'theoretical' fun, because I cannot see any practical benefits of the restrictions mentioned, apart from the optimization of 'outdated' (outplaced?) agent disposal. But maybe it could be of some interest. Thank you! EDIT: It's obvious that a virus is a fitting example, although the deletion of such agents is rarely a concern of their developers. More precisely, I'm interested in 'travelling' software - that is, when the count (or at least the order) of the agents is roughly constant, and it's just the whole system that travels.

    Read the article

  • How to Sync Any Browser’s Bookmarks With Your iPad or iPhone

    - by Chris Hoffman
    Apple makes it easy to synchronize bookmarks between the Safari browser on a Mac and the Safari browser on iOS, but you don’t have to use Safari — or a Mac — to sync your bookmarks back and forth. You can do this with any browser. Whether you’re using Chrome, Firefox, or even Internet Explorer, there’s a way to sync your browser bookmarks so you can access your same bookmarks on your iPad. Safari on a Mac Apple’s iCloud service is the officially supported way to sync data with your iPad or iPhone. It’s included on Macs, but Apple also offers similar iCloud bookmark syncing features for Windows. On a Mac, this should be enabled by default. To check whether it’s enabled, you can launch the System Preferences panel on your Mac, open the iCloud preferences panel, and ensure the Safari option is checked. If you’re using Safari on Windows — well, you shouldn’t be. Apple is no longer updating Safari for Windows. iCloud allows you to synchronize bookmarks between other browsers on your Windows system and Safari on your iOS device, so Safari isn’t necessary. Internet Explorer, Firefox, or Chrome via iCloud To get started, download Apple’s iCloud Control Panel application for Windows and install it. Launch the iCloud Control Panel and log in with the same iCloud account (Apple ID) you use on your iPad or iPhone. You’ll be able to enable Bookmark syncing with Internet Explorer, Firefox, or Chrome. Click the Options button to select the browser you want to synchronize bookmarks with. (Note that bookmarks are called “favorites” in Internet Explorer.) You’ll be able to access your synced bookmarks in the Safari browser on your iPad or iPhone, and they’ll sync back and forth automatically over the Internet. Google Chrome Sync Google Chrome also has its own built-in sync feature and Google provides an official Chrome app for iPad and iPhone. If you’re a Chrome user, you can set up Chrome Sync on your desktop version of Chrome — you should already have this enabled if you have logged into your Chrome browser. You can check if this Chrome Sync is enabled by opening Chrome’s settings screen and seeing whether you’re signed in. Click the Advanced sync settings button and ensure bookmark syncing is enabled. Once you have Chrome Sync set up, you can install the Chrome app from the App Store and sign in with the same Google account. Your bookmarks, as well as other data like your open browser tabs, will automatically sync. This can be a better solution because the Chrome browser is available for so many platforms and you gain the ability to synchronize other browser data, such as your open browser tabs, between your devices. Unfortunately, the Chrome browser is slower than Apple’s own Safari browser on iPad and iPhone because of the way Apple limits third-party browsers, so using it involves a trade-off. Manual Bookmark Sync in iTunes iTunes also allows you to sync bookmarks between your computer and your iPad or iPhone. It does this the old-fashioned way, by initiating a manual sync when your device is plugged in via USB. To access this option, connect your device to your computer, select the device in iTunes, and click the Info tab. This is the more outdated way of synchronizing your bookmarks. This feature may be useful if you want to create a one-time copy of your bookmarks from your PC, but it’s nowhere near ideal for regular syncing. You don’t have to use this feature, just as you really don’t have to use iTunes anymore. In fact, this option is unavailable if you’ve set up iCloud syncing in iTunes. 
After you set up bookmark syncing via iCloud or Chrome Sync, bookmarks will sync immediately after you save, remove, or edit them.     

    Read the article

  • The Hunger Games for Aspiring IT Professionals

    - by Dain C. Hansen
    It seems that no one can escape the buzz around Hunger Games. And who could? Stephen King said it best in his review when he referred to Collins’ novel as “a violent, jarring speed-rap of a novel that generates nearly constant suspense and may also generate a fair amount of controversy”. So what’s the tie-in for IT? Let’s leave the dystopia of District 12 and come back to today’s reality. This is the world of radical IT paradigm shifts that haven’t been seen since Java was introduced in 1995. Everything you learned in school is probably outdated as of Friday. And everything you learned on Friday will probably change when you get to work on Monday. Nevertheless, we’re eager, we’re aspiring, we’re hungry to learn. While the challenges upon us may not rival the venomous bees (or ‘tracker jackers’) seen in this blockbuster, there are certainly obstacles to be found. In preparation, I leave you two pieces of advice - aside from avoiding werewolves…
    Learn the Cloud
    If you had asked me what to learn in 1995, I would have said, “Go learn Java”. But now my advice is “Go learn Java and then learn Cloud”. Cloud computing and Java go hand in hand. This is especially true for Oracle’s own Public Cloud, which uses Java (via WebLogic 12c) as well as Oracle Database at its core foundation. Understanding the connotations of elasticity, scale, virtualization, and multi-tenancy (to name just a few) requires a strong foundation in computer science and especially Java to get it right. Without Java, the Cloud is nothing more than a brittle application meagerly deployed on the internet.
    Get Social and Actively Participate
    And at all levels. Socializing your ideas internally is dreadfully important. And this means socializing and communicating your good ideas to lines of business, to architects, business analysts, developers, DBAs and Operations. But don’t forget to go external. Stay current by being on the lookout for blogs, tweets, webcasts, papers, podcasts and videos for your technology area. Be not just a subscriber but a participant in these channels as well. Attend industry and vendor sponsored events to learn from the experts – and seek out opportunities to stay connected with those that are smarter than you. You’ll gain more understanding if you participate actively. At the same time you’ll make friends (and allies) and you’ll be glad you did. To help you get social and actively participate [while learning the Cloud], here are a couple of pointers for you:
    See our website on Cloud and Fusion Middleware
    Subscribe to our regular Fusion Middleware Newsletter
    Follow us on Twitter and Facebook
    Find us at one of our key events
    Meanwhile, happy IT hunger games!

    Read the article

  • C# Item system design approach, should I use abstract classes, interfaces or virtuals?

    - by vexe
    I'm working on a Resident Evil 1/2/3/0/Remake type of game. Currently I've done a big part of the inventory system (here's a link if you wanna see my inventory; it's pretty outdated, I've since added a lot of features and made a lot of enhancements). Now I'm thinking about how to approach the items system. If you've played any Resident Evil game or any of its likes, you should be familiar with what I'm trying to achieve. Here's a very simple category I made for the items: you have different items, with different operations you could perform on them. There are usable items, like herbs and first aid kits whose 'use' affects your health, keys that unlock doors, and equipable items that you could 'equip', like weapons. Also, you can 'combine' two items together to get a new one: for example, mixing a green and a red herb would give you a new type of herb, combining a lighter with a paper would give you a burnt paper, or ammo with a gun would reload the gun. You know the usual RE drill. Not all items are 'transformable'; for example:
    lighter + paper = burnt paper (it's the paper that 'transforms' to burnt paper and not the lighter; the lighter is not transformable, it will remain as it is)
    green herb + red herb = newHerb1 (both herbs will vanish and transform into this new type of herb)
    ammo + gun = reloaded gun (the ammo's state will remain as it is, it will just decrease; nothing happens to the gun, it just gets reloaded)
    Also, a key note to remember is that you can't just combine items randomly; each item has 'mating' item(s). So to sum up: different items, and different operations on them. The question is, how to approach this, design-wise? I've been learning about interfaces, but it just doesn't quite get into my head. I mean, why not just use classes with the good old inheritance? I know the technical details of interfaces and that the cool thing about them is that they don't require an inheritance chain, but I just can't see how to use them properly, that is, if they are the right thing to use here. So should I go with just classes and inheritance, just like in the tree I showed you? Or should I think more about how to use interfaces (IUsable, IEquipable, ITransformable) - and why not just use classes UsableItem, EquipableItem, TransformableItem? I want something that won't give me headaches in the long run, something resilient/flexible to future changes. I'm OK using classes, but I smell something better here. The way I'm thinking is to possibly use both inheritance and interfaces, so that you have a branch like this: item - equipable - weapon. But then again, the weapon has methods like 'reload', 'examine', 'equip' and some of them 'combine', so I'm thinking to make weapon implement ICombinable?... Not all items get used the same way: using herbs will increase your health, using a key will open a door, so IUsable maybe? Should I use a big database (XML for example) for all the items: item names, mates, nRowsReq, nColsReq, etc.? Thanks so much for your answers in advance; note that demo 3 is coming after I'm done with items :D
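    As a sketch of the "base class for the shared bits, small interfaces for the cross-cutting operations" combination described above - written in TypeScript syntax because the idea maps almost one-to-one onto C# interfaces and classes, and every name and number here is hypothetical:

        interface IUsable     { use(target: Player): void; }
        interface IEquipable  { equip(target: Player): void; }
        interface ICombinable { combineWith(other: Item): Item | null; } // null means "these two don't mate"

        class Player { health = 100; equipped: Item | null = null; }

        abstract class Item {
          constructor(public readonly name: string) {}
        }

        class Ammo extends Item { count = 15; }

        class Herb extends Item implements IUsable, ICombinable {
          use(target: Player): void { target.health += 25; }               // 'using' a herb affects health
          combineWith(other: Item): Item | null {
            return other instanceof Herb ? new Herb("Mixed Herb") : null;  // both herbs become one new item
          }
        }

        class Weapon extends Item implements IEquipable, ICombinable {
          equip(target: Player): void { target.equipped = this; }
          combineWith(other: Item): Item | null {
            if (other instanceof Ammo) { other.count -= 15; return this; } // gun is reloaded, not transformed
            return null;                                                   // anything else doesn't mate
          }
        }

    A usable key would just be another small class implementing IUsable, and the 'mating' table could live in data (XML, for example) rather than in code.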

    Read the article

  • Knowledge Management Feedback

    - by Robert Schweighardt
    Did you know that you can provide feedback on Knowledge Management (KM) articles? It's nice to read a technical article that is well-written, the grammar and spelling are correct, the information is up to date, concise, to the point, easy to understand and it flows from one paragraph to another.  And though we always strive for a well-written article, it doesn't always come out that way. Knowledge Management articles are written by Oracle Support Engineers and we welcome your feedback.  Providing feedback helps to improve Oracle's Knowledge Base.  If you're reading a KM article and you have a comment, please let us know about it.  Maybe it's just to fix a spelling or grammatical error.  Maybe there's a broken link that needs to be fixed.  Maybe it's a suggestion to provide additional information.  Maybe the article contains incorrect information.  Maybe some information in the article is outdated.  Maybe something is not clear in the article.  Whatever it is, we want to hear about it.  We value your input! When you provide feedback it goes directly to the owner of the article.  The owner carefully reviews the comment and decides whether or not to implement it.  Most comments are implemented and we strive to implement them within a week!  For those comments that are not implemented, there is normally a good reason.  It may not be feasible to implement the suggestion or the suggestion may not be correct.  We don't take the decision lightly! So how do you provide feedback? Providing feedback on a KM article depends on whether you're a customer or an Oracle Employee. Customer 1. In the upper right hand corner of the article, click on the little +/- Rate this document icon: Note: The grayed out Comments (0) link will only show a number when there are open comments that are still being evaluated. 2. In the Article Rating window, complete as many of the following optional fields as you like and then click the Send Rating button: Rate the article as Excellent, Good or Poor Specify whether the article helped you or not Specify the ease of finding the article Provide whatever comments you have Employee The interface for Oracle Employees is a little bit different, there are more options. 1. The +/- Rate this document icon is also available to employees and is identical to what the customers have.  Please see Customer section above. 2. The Show document comments link shows all comments that have ever been submitted for the article 3. Employees have an additional way to submit a comment.  Click on the little + Add Comment icon: 4. Fill out the Add Comment fields and click the Add Comment button: We look forward to your feedback!

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020. In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets. "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing." Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result. The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure in place to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from having less outdated infrastructure and fewer legacy systems, which others often cite as a difficult barrier to deploying new solutions.
As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • A (Late) Meme Monday Post: On SQLFamily

    - by Argenis
    Yesterday a member of the SQL community who I deeply admire sent me a DM on Twitter asking whether I had done a SQLFamily post for Thomas LaRock’s (blog|@SQLRockstar) Meme Monday for November. I replied that I had not, and I regretted not having done so. A subtle DM followed my response: “Get on it, you have all week”. And indeed I must. So here’s an attempt to express some of my feelings on a community that has catapulted my career like nothing else before I embraced it.
    Nanos Gigantum Humeris Insidentes
    I stand on the shoulders of giants. My SQLFamily has given me support at all levels. Professionally and personally. There is never a lack of will to help and provide advice to others in this community. And I do my best to help. On #SQLHelp on Twitter, via email, or even on the phone. I expect no retribution, because I know that when and if I do run into problems, my SQLFamily will be there for me. I have met some of the most humble, dedicated and most professional people in the SQL community. And some of them have pretty big titles: MVPs, MCMs, Regional Mentors, leaders of PASS, SQLCAT members, and even PMs and Devs on the SQL Server team. All are welcome, and that includes YOU! I have also met some people that are rather reserved and don’t participate as much in the community, for whatever reason. Be that as it may, let it be known to all that we are a very welcoming community – heck, some of my closest friends and people I can count on in the community have completely opposite political views. We share one goal: to get better and help others get better. Even if you are a lurker – my hope is that one day you’ll decide to give back some of what you have learned.
    You have to take it to the next level
    At one of my previous jobs as an IT Supervisor I used to tell my team all the time about the benefits of continuous education and self-driven learning. Shortly after I left that job, the company went bankrupt and some of my staff got laid off – some without any severance pay whatsoever. I eventually found out that some of them had a really hard time finding another job, because their skills were simply outdated. They had become stale professionals. Don’t be one of them. If you don’t take advantage of these learning resources, somebody else will – and that person will have an advantage over you when applying for that awesome job position that just opened up. There’s a severe shortage of good DBAs and DB Devs out there. What’s your excuse for not being excellent? Even if your knowledge of SQL Server is at the beginner level, you have no excuse not to get better. Just go to SQLUniversity and learn from there. Don’t get stale!
    Thank You
    To all of you in the SQL community who put so much time and energy into helping others, my deepest gratitude to you. I can’t wait to meet you all again at the next event and share our SQL stories over a pint of beer (or a shot of Jaeger). Cheers! -Argenis

    Read the article

  • Part 7: EBS Modifications and Flagged Files in R12

    - by volker.eckardt(at)oracle.com
    Let me, based on my previous blog, explain the procedure of flagged files a bit better and facilitate the same with screenshots. Flagged files is a concept within the Oracle eBusiness Suite (EBS) release 12, where you flag a standard deployment file, let’s say a Forms file, a Package or a Java class file. When you run the patch analyse, the list of flagged files will be checked and in case one of these files gets patched, the analyse report will tell you. Note: This functionality is also available in release 11, here it is implemented and known as “applcust.txt”. You can flag as many files as you want, in whatever relationship they are with your customizations. In addition to the flag itself you can add a comment. You should use this comment to point to your customization reference (here XXAR_RPT_066 or XXAP_CUST_030). Consider the following two cases: You have created your own report, based on a standard report. In this case you will flag the report file itself, and the key views used. When a patch updates one of these files, you will be informed and can initiate a proper review and testing. (ex.: first line for ARXCTA.rdf) You have created an extensive personalization and because it is business critical you like to be informed if the page definition gets updated. In this case you register the PG.xml file as flagged file. (ex.: second line below for CreateExtBankAcctPG.xml) The menu path to register flagged files is the following: (R) System Administrator > (M) Oracle Applications Manager > Site Map > Maintenance > Register Flagged Files     Your DBA should now run the Patch Analyse every time he is going to apply a new patch. (R) System Administrator > (M) Oracle Applications Manager > Patch Wizard > Task “Recommend/Analyze Patches” The screenshot above shows the impact summary. For this blog entry the number “2” titled “Flagged Files Changed“ is in our focus. When you click the “2” you will get a similar screen like the first in this blog, showing you exactly the files which will get patched if you continue and apply this patch in this environment right now. Note: It is also shown that just 20% of all patch files will get applied. This situation might be different in case your environments are on a different patch level. For sure also the customization impact might then be different. The flagging step can be done directly in the Oracle Applications Manager.  Our developers are responsible for. To transport such a flag+comment we use a FNDLOAD script. It is suggested to put the flagged files data file directly into your CEMLI patch. Herewith the flagged files registration will be executed right at the same time when the patch gets applied. Process Steps: Developer: Builds CEMLI Reviews code and identifies key standard objects referenced Determines standard object files and flags them Creates FNDLOAD file and adds the same to the CEMLI patch DBA: Executes for every new Oracle standard patch the patch analyse in a representative environment Checks and retrieves the flagged files and comments Sends flagged file list back to development team for analyse / retest Developer: Analyses / Updates / Retests effected CEMLIs Prerequisite: The patch analyse has to be executed in an environment where flagged files have been registered. (If you run the patch analyse in a vanilla or outdated environment (compared to your PROD), the analyse will not be so helpful!) When to start with Flagged files? Start right now utilizing this feature. 
    It is an investment that improves production stability and helps you fulfil your SLA!
    Summary
    Flagged Files is a very helpful EBS R12 technique when analysing patches. Implement a procedure within your development process to maintain such flags. Let the DBA run the patch analysis in an environment with a patch and customization level similar to your current production.
    Related Links: EBS Patching Procedures - Chapter 2-13 - Registered Flagged Files

    Read the article

  • Communication between state machines with hidden transitions

    - by slartibartfast
    The question emerged for me in embedded programming, but I think it can be applied to quite a number of general networking situations, e.g. when a communication partner fails. Assume we have an application logic (a program) running on a computer and a gadget connected to that computer via e.g. a serial interface like RS232. The gadget has a red/green/blue LED and a button which disables the LED. The LED's color can be driven by software commands over the serial interface, and the state (red/green/blue/off) is read back and causes a reaction in the application logic. Asynchronous behaviour of the application logic with regard to the LED color, down to a certain delay (depending on the execution cycle of the application), is tolerated. What we essentially have is a resource (the LED) which cannot be reserved and handled atomically by software, because the (organic) user can at any time press the button to interfere with/break the software's attempt to switch the LED color. Stripping this example of its physical outfit, I dare say that we have two communicating state machines A (application logic) and G (gadget), where G executes state changes unbeknownst to A (and also the other way round, but this is not significant in our example) and only A can be modified at a reasonable price. A needs to see the reaction and state of G in one piece of information, which may be (slightly) outdated but not inconsistent with respect to the short time window when this information was generated on the side of G. What I am looking for is a concise method to handle such a situation in embedded software (i.e. no layer/framework like CORBA etc. available): a programming technique which is able to map the complete behaviour of both participants onto classical interfaces of a classical programming language (C in this case). To complicate matters (or rather, to generalize), a simple high-frequency communication cycle of A to G and back (IOW: A rapidly polling G) is out of focus because of technical restrictions (delay of serial com, A not always active, etc.). What I currently see as a general solution is:
    - the application logic A as one thread of execution
    - an adapter object (proxy) PG (representing G inside the computer), together with the serial driver, as another thread
    - a communication object between the two (A and PG) which is transactionally safe to exchange
    The two execution contexts (threads) on the computer may be multi-core, or just interrupt driven, or tasks in an RTOS. The com object contains the following data:
    - suspected state (written by A): effectively a member of the power set of states in G (in our case: red, green, blue, off, red_or_green, red_or_blue, red_or_off... etc.)
    - command data (written by A): test_if_off, switch_to_red, switch_to_green, switch_to_blue
    - operation status (written by PG): operation_pending, success, wrong_state, link_broken
    - new state (written by PG): red, green, blue, off
    The idea of the com object is that A writes whichever (set of) state it thinks G is in, together with a command. (Example: suspected state="red_or_green", command: "switch_to_blue".) Notice that the commands issued by A will not work if the user has switched off the LED, and A needs to know this. PG will pick up such a com object, try to send the command to G, receive its answer (or a timeout) and set the operation status and new state accordingly. A will take back the object once it is no longer at operation_pending and can react to the outcome.
The com object could be separated of course (into two objects, one for each direction) but I think it is convenient in nearly all instances to have the command close to the result. I would like to have major flaws pointed out or hear an entirely different view on such a situation.
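    The question targets plain C, but just to make the shape of that com object concrete, here is a compact sketch in TypeScript notation (field names taken from the description above; in C these would typically become enums plus a struct handed over under a lock or by pointer swap):

        enum LedState  { Red, Green, Blue, Off }
        enum Command   { TestIfOff, SwitchToRed, SwitchToGreen, SwitchToBlue }
        enum OpStatus  { OperationPending, Success, WrongState, LinkBroken }

        // One transaction-safe exchange slot between A and the proxy PG.
        interface ComObject {
          suspectedState: Set<LedState>; // written by A: a member of the power set of G's states
          command: Command;              // written by A
          status: OpStatus;              // written by PG
          newState: LedState;            // written by PG
        }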

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to Angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of checkboxes based on data from a .js include, with data like this:

        $scope.fieldMappings.investmentObjectiveMap = [
          {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"},
          {'id':"STABLE", 'name':"Moderate"},
          {'id':"BALANCED", 'name':"Moderate Growth"},
          // etc
          {'id':"NONE", 'name':"None"}
        ];

    The checkboxes are created using an ng-repeat, like this:

        <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap">
          ...
        </div>

    However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldMappings object). To accomplish this, I created a directive which accepts a destination array destarray that is eventually mapped to the model. I also know I need to handle some very specific GUI controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that it's bad to put DOM manipulation code inside your controller - it should go in directives. So do I need another directive? So far (html):

        <input my-checkbox-group type="checkbox"
               fieldobj="investmentObjective"
               ng-click="validationfunc()"
               validationfunc="clearOnNone()"
               destarray="investor.investmentObjective" />

    Directive code:

        .directive("myCheckboxGroup", function () {
          return {
            restrict: "A",
            scope: {
              destarray: "=",      // the source of all the checkbox values
              fieldobj: "=",       // the array the values came from
              validationfunc: "&"  // the function to be called for validation (optional)
            },
            link: function (scope, elem, attrs) {
              if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) {
                elem[0].checked = true;
              }
              elem.bind('click', function () {
                var index = scope.destarray.indexOf(scope.fieldobj.id);
                if (elem[0].checked) {
                  if (index === -1) {
                    scope.destarray.push(scope.fieldobj.id);
                  }
                } else {
                  if (index !== -1) {
                    scope.destarray.splice(index, 1);
                  }
                }
              });
            }
          };
        })

    Controller snippet (.js):

        .controller('SuitabilityCtrl', ['$scope', function ($scope) {
          $scope.clearOnNone = function() {
            // naughty jQuery DOM manipulation code that
            // looks at checkboxes and checks/unchecks as needed
          };
        }]);

    The above code is done and works fine, except the naughty jQuery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself - I could be done already if I just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over, more than if I had just written jQuery code that 99% of us would understand at a glance? How do other developers draw the line? I see this all over Stack Overflow.
For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet he has opted to do it the angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then check/uncheck the other boxes accordingly? A more complex directive? I can't believe I'm the only developer that is having to implement code that is more complex than needed just to satisfy an opinionated framework.
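    One possible direction for the "None" handling asked about above - just a sketch under the assumption that each checkbox's checked state is derived from destarray (the group directive would then watch the array, e.g. via scope.$watchCollection, instead of poking the DOM), so the rule becomes a small pure function the directive calls after every toggle; all names are hypothetical:

        // Enforce the "None" rule purely on the model array.
        function enforceNoneRule(destarray, noneId) {
          if (noneId === undefined) return;           // this group has no "NONE" option
          var noneIdx = destarray.indexOf(noneId);
          var othersChecked = destarray.some(function (id) { return id !== noneId; });
          if (othersChecked && noneIdx !== -1) {
            destarray.splice(noneIdx, 1);             // something real is checked -> uncheck "None"
          } else if (!othersChecked && noneIdx === -1) {
            destarray.push(noneId);                   // nothing else left checked -> check "None"
          }
        }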

    Read the article

  • BIOS upgrade lowers CPU temperature

    - by N.N.
    Setup I've got a system with an Asus P8Z68-V PRO motherboard and an Intel Core i7-2600K CPU running at stock speed (no overlocking) which I cool with a Noctua NH-U12P. On the heatsink I've got the two included fans connected via the included Low-Noise Adapters (L.N.A.) 1100 RPM, 16.9 dB(A). In the BIOS settings I've set the CPU and chassis fan profile to silent. Issue Yesterday I upgraded from BIOS version 0501 to 0606. After the upgrade I checked the temperatures in the BIOS monitor and was surprised to see that the CPU temperature was slightly ~30°C. Before the upgrade the CPU temperature was ~50°C with the same BIOS settings (see the following heading for details on temperatures). How can this be? It seems a bit odd that a BIOS upgrade can lower the CPU temperature by 20°C and it also seems odd that the CPU temperature is lower than the chassis temperature. Temperatures When I've checked temperatures the room temperature has been ~23°C. I haven't changed the placement of the computer nor the hardware or cooling setup between BIOS versions. BIOS version 0501 BIOS monitor: CPU: ~50°C Chassis: ~33°C I haven't got any temperature measures from lm-sensors or the like for version 0501 because I only discovered the issue after upgrading to version 0606 and the BIOS updater utility won't let me downgrade to version 0501 (it says "outdated image" when I try to load version 0501). BIOS version 0606 BIOS monitor: CPU: ~30°C Chassis: ~33°C lm-sensors in Ubuntu 11.04 Desktop 64-bit (sudo sensors after an uptime of 4 h 52 min and a load average of 0.22, 0.18, 0.15): coretemp-isa-0000 Adapter: ISA adapter Core 0: +32.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0001 Adapter: ISA adapter Core 1: +35.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0002 Adapter: ISA adapter Core 2: +29.0°C (high = +80.0°C, crit = +98.0°C) coretemp-isa-0003 Adapter: ISA adapter Core 3: +36.0°C (high = +80.0°C, crit = +98.0°C) The BIOS monitor temperatures was checked directly after the lm-sensors temperatures was checked. BIOS version 0706, 0801, 1101 and 3203 I get the same kind of temperatures both in the BIOS monitor and with lm-sensors in BIOS version 0706, 0801, 1101 and 3203 as in 0606. Information from Asus The 0606 changelog mentions nothing explicitly about CPU temperature (but item 3., as indicated by sidran32, might affect temperatures): P8Z68-V PRO 0606 BIOS with IRST 10.6.0.1002 Enable the support of Intel Rapid Storage Technology version 10.6.0.1002 Release Improve DRAM compatibility Improve System stability Improve compatibility with some Raid card model Increase IGD share memory size to 512MB However the following FAQ might give a hint: FAQs I find that the CPU temperature reading in BIOS is about 10~20 degrees centigrade hotter than the reading in OS. Is it normal? Page Tools Solution That is normal as BIOS does not send idle command to the CPU, making most of the power saving features useless. You should be getting similar reading if you disable EIST/C1E/CPU C3 Report/CPU C6 Report in BIOS.

    Read the article

  • How to change radius of rounded rectangle in Photoshop

    - by MattDiPasquale
    At this point, I'm thinking of just using CSS 3, esp. since I'm a programmer, but I'd like to do this with Photoshop because I think it's nicer since I'm working with images anyway, among other reasons... Before I move on, my first question is: Is there a place like SuperUser for designers (or for Photoshop-like questions)? What I Want: I want the icons on http://www.mattdipasquale.com/ to look like those on http://about.me/mattdipasquale. About.me has an outdated Twitter icon and does not have icons for GitHub, StackOverflow, etc. So, although I like the look of their icons, I want to be able to create these icons myself instead of using their versions. What I Have: I have different iPhone icons, like the Facebook iPhone icon, Twitter iPhone icon, etc., that I got from iTunes, using Firebug to find the URL of the background image. I opened them up in Photoshop and pressed option + command + i to reduce the image size to 32px x 32px with Bicubic Sharper (best for reduction). I now have a square icon layer. Closing the Gap: In addition to the icon layer, I want to have a clipping-mask layer that will apply the 5px rounded corners, 1px stroke, and 1px bevel. (Note: I just want to apply effects to the edges of the icon because the gloss and other effects are already encoded in the iTunes image. Also, I'm just guessing about the pixel values, but I want it to look good, like the icons on about.me.) What settings should I use for the blend options to make the icons look good, like iPhone icons or those used by about.me? Why a Clipping Mask? The reason I want to use a clipping mask is that I want ease of reproducibility. I want to be able to apply the same styling to other square icon layers by simply replacing the square icon layer and then saving for web. If there is a better way to achieve such ease of reproducibility, please suggest it. I've seen Photoshop iPhone icon templates, but I couldn't figure out how to use them with my own images. Thanks! Matt

    Read the article

  • How to redirect http requests to https (nginx)

    - by spuder
    There appear to be many questions and guides out there that instruct how to setup nginx to redirect http requests to https. Many are outdated, or just flat out wrong. server { listen *:80; server_name <%= @fqdn %>; #root /nowhere; #rewrite ^ https://$server_name$request_uri? permanent; #rewrite ^ https://$server_name$request_uri permanent; #return 301 https://$server_name$request_uri; #return 301 http://$server_name$request_uri; #return 301 http://192.168.33.10$request_uri; return 301 http://$host$request_uri; } server { listen *:443 ssl default_server; server_name <%= @fqdn %>; server_tokens off; root <%= @git_home %>/gitlab/public; ssl on; ssl_certificate <%= @gitlab_ssl_cert %>; ssl_certificate_key <%= @gitlab_ssl_key %>; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers AES:HIGH:!ADH:!MDF; ssl_prefer_server_ciphers on; location / { # serve static files from defined root folder;. # @gitlab is a named location for the upstream fallback, see below try_files $uri $uri/index.html $uri.html @gitlab; } # if a file, which is not found in the root folder is requested, # then the proxy pass the request to the upsteam (gitlab puma) location @gitlab { proxy_read_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_connect_timeout 300; # https://github.com/gitlabhq/gitlabhq/issues/694 proxy_redirect off; ect.... I've restarted after every configuration change, and yet I still only get the 'Welcome to nginx' page when visiting http://192.168.33.10. whereas https://192.168.33.10 works perfectly. Why will nginx still not redirect http requests to https? tailf /var/log/nginx/access.log 192.168.33.1 - - [22/Oct/2013:03:41:39 +0000] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" 192.168.33.1 - - [22/Oct/2013:03:44:43 +0000] "GET / HTTP/1.1" 200 133 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:24.0) Gecko/20100101 Firefox/24.0" tailf /var/log/nginx/gitlab_error.lob 2013/10/22 02:29:14 [crit] 27226#0: *1 connect() to unix:/home/git/gitlab/tmp/sockets/gitlab.socket failed (2: No such file or directory) while connecting to upstream, client: 192.168.33.1, server: gitlab.localdomain, request: "GET / HTTP/1.1", upstream: "http://unix:/home/git/gitlab/tmp/sockets/gitlab.socket:/", host: "192.168.33.10" Resources http://wiki.nginx.org/Pitfalls How to make nginx redirect How to force or redirect to SSL in nginx? nginx ssl redirect Nginx & Https Redirection https://www.tinywp.in/301-redirect-wordpress/ How to force or redirect to SSL in nginx?

    Read the article

  • Vagrant reporting VirtualBox guest additions out of date

    - by DTest
    Fairly new to Vagrant, so bear with me if I don't understand the process. I downloaded a CentOS box off http://www.vagrantbox.es/ Started it up running VirtualBox 4.2.4 and got this message: [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.0.8 VirtualBox Version: 4.2.4 So I used the vbguest plugin to update the guest additions, then repackaged the box as suggested. Having replaced the old box and loading it up I get the same message about guest additions being outdated, but vbguest reports that they are up to date (the automatic vbguest update is disabled in my Vagrantfile): Vagrant::Config.run do |config| config.vm.box = "centos56_64" config.vbguest.auto_update = false config.vbguest.no_remote = true end And the commands: dtest$ vagrant up [default] Importing base box 'centos56_64'... [default] The guest additions on this VM do not match the install version of VirtualBox! This may cause things such as forwarded ports, shared folders, and more to not work properly. If any of those things fail on this machine, please update the guest additions and repackage the box. Guest Additions Version: 4.0.8 VirtualBox Version: 4.2.4 [default] Matching MAC address for NAT networking... [default] Clearing any previously set forwarded ports... [default] Forwarding ports... [default] -- 22 => 2222 (adapter 1) [default] Creating shared folders metadata... [default] Clearing any previously set network interfaces... [default] Booting VM... [default] Waiting for VM to boot. This can take a few minutes. [default] VM booted and ready for use! [default] Mounting shared folders... [default] -- v-root: /vagrant dtest$ vagrant vbguest --no-install [default] Detected Virtualbox Guest Additions 4.2.4 --- OK. [default] Virtualbox Guest Additions on host: 4.2.4 - guest's version is 4.2.4 Since they appear to be updated after an install, I could ignore the message. But is it possible to get rid of it?

    Read the article

  • OpenVPN multiple servers on the same subnet, high availability

    - by andre
    Hey everyone. Let me start by saying that my Linux experience isn't super awesome but I can usually find my way around things easily. Over at work we have an OpenVPN setup that's been due for some improvement for a while now. The main server (tap mode) runs in our office, behind a rather slow DSL connection. The main problem is that, since I'm usually out of the office, every time I want to access something on the virtual network I have to go through that server to get anywhere else. We have two servers up on 100 Mbit connections that we use for development and production purposes, about 3 more servers in the office (one of them behind a different T1 line for VOIP) and about two dozen clients who use the network on a daily basis from various locations. We've had situations where network routing (outside of our control) would not allow people to reach our main OpenVPN server whilst the other locations were connectable. Also any time someone outside the office wants to fetch something from any of the servers (say, a 500 MB code repository), a whopping 20 KB/s download speed is just unacceptable these days (did I mention slow DSL? ok). We had to implement traffic shaping on this server since maxing out this connection was fairly trivial. I had the thought of running two (or more) OpenVPN servers in the network. These would have to have the same subnet though, as our application relies on virtual network's IP addresses for some of its core functionality. The clients would also preferably retain the same IP addresses but that's not vital. For simplicity, lets call the current server office and the second server I'm setting up, cloud. Call the server on the T1 phone. This proved to be rather complex because as soon as I connect to cloud, I cannot see office. Any routes to a server that would go through office also do not work while I'm connected to cloud (no ping, nothing) and vice-versa. There's no rules for iptables that would be blocking the traffic either. Recently I came across this article on linuxjournal but the solution they provide seems to only cover the use of two servers and somewhat outdated (can't even find much documentation, their wiki is offline). They also state that adding more servers would be a complex task. Ideally I would like to keep the existing server office running the virtual network and also run the OpenVPN daemon on the cloud and phone servers (100 Mbit and very reliable connection, respectively) so that we're on safe ground in case of a hardware failure, DSL failure, etc. So, in essence, I'm looking for a highly available OpenVPN solution (fix, patch, hack, tweak, whatever you want to call it) that will accept connections on multiple hosts (2 or more) whilst keeping the same IP address subnet regardless of the server to which you connect to. Thanks for reading and sorry for the long post, I hope it gets the point across :P

    Read the article

  • Why do I sometimes get 'sh: $'\302\211 ... ': command not found' in xterm/sh?

    - by amn
    Sometimes when I simply type a valid command like 'find ...', or anything really, I get back the following, which is completely unexpected and confusing (... is command name I type): sh: $'\302\211...': command not found There is some corruption going on I think. I don't use color in my prompt, I am using the Bash shell in POSIX mode as sh (chsh to /bin/sh and so on - $SHELL is sh). What is going on and why does this keep happening? Anything I can debug? I think this is more of an xterm issue than sh, or at least a combination of the two. Files, for context: My /etc/profile, as distributed with Arch Linux x86-64: # /etc/profile #Set our umask umask 022 # Set our default path PATH="/usr/local/sbin:/usr/local/bin:/usr/bin" export PATH # Load profiles from /etc/profile.d if test -d /etc/profile.d/; then for profile in /etc/profile.d/*.sh; do test -r "$profile" && . "$profile" done unset profile fi # Source global bash config if test "$PS1" && test "$BASH" && test -r /etc/bash.bashrc; then . /etc/bash.bashrc fi # Termcap is outdated, old, and crusty, kill it. unset TERMCAP # Man is much better than us at figuring this out unset MANPATH My /etc/shrc, which I created as a way to have sh parse some file on startup, when non-login shell. This is achieved using ENV variable set in /etc/environment with the line ENV=/etc/shrc: PS1='\u@\H \w \$ ' alias ls='ls -F --color' alias grep='grep -i --color' [ -f ~/.shrc ] && . ~/.shrc My ~/.profile, I am launching X when logging in through first virtual tty: [[ -z $DISPLAY && $XDG_VTNR -eq 1 ]] && exec xinit -- -dpi 111 My ~/.xinitc, as you can see I am using the system as a Virtual Box guest: xrdb -merge ~/.Xresources VBoxClient-all awesome & exec xterm And finally, my ~/.Xresources, no fancy stuff here I guess: *faceName: Inconsolata *faceSize: 10 xterm*VT100*translations: #override <Btn1Up>: select-end(PRIMARY, CLIPBOARD, CUT_BUFFER0) xterm*colorBDMode: true xterm*colorBD: #ff8000 xterm*cursorColor: S_red Since ~/.profile references among other things /etc/bash.bashrc, here is its content: # # /etc/bash.bashrc # # If not running interactively, don't do anything [[ $- != *i* ]] && return PS1='[\u@\h \W]\$ ' PS2='> ' PS3='> ' PS4='+ ' case ${TERM} in xterm*|rxvt*|Eterm|aterm|kterm|gnome*) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033]0;%s@%s:%s\007" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; screen) PROMPT_COMMAND=${PROMPT_COMMAND:+$PROMPT_COMMAND; }'printf "\033_%s@%s:%s\033\\" "${USER}" "${HOSTNAME%%.*}" "${PWD/#$HOME/~}"' ;; esac [ -r /usr/share/bash-completion/bash_completion ] && . /usr/share/bash-completion/bash_completion I have no idea what that case statement does, by the way, it does look a bit suspicious though, but then again, who am I to know.

    Read the article

  • Recovery strategy for a single bad sector in moricons.dll

    - by Damon
    This week, my harddisk made me an early christmas present in the form of a single defect sector. To make up for the puny size of the present, it chose a sector inside moricons.dll for that. This means that now the system takes about 5 minutes to boot before Windows gives up and moves on, and there's 2 dozen scary "critical failure" entries in the system log after every boot, which is annoying. OK, admittedly, I shouldn't complain, it could be worse, the bad sector could be in ntldr... SMART info more or less indicates (for what SMART can indicate anyway) that the drive is mostly OK. Soft Read Error Rate has a score of 96, and Current Pending Sector Count has a raw value of 8, which translates to a score of 100. Acronis DriveMonitor makes this an issue (lowering the overall rating to 75%), HDD Health calls it "excellent", giving an overall rating of 95% (which is what this harddisk from day one). No single score is below 95 (power on hours and spin up count), and most are 100 anyway. Well, whatever, I've seen drives with perfect SMART values fail from one second to the other, and drives with moderate values work for years. So, I'm inclined not to put too much weight into that overall. TL;DR Now... to the problem: I don't feel like trashing the disk just yet (that's planned with a new OS install upgrading to Win7 early next year, independently of this issue), but in the mean time, I would still like to have a smoothly running system again. Therefore, I feel tempted to tamper with it, but before I render my system entirely unusable (since I've never done this before), I'd like to verify that my planned procedere is likely to suceed in having a working system again: Copy moricons.dl_ from the Windows install disk, rename it to moricons.zip, and unzip it. This gives an intact 5.1.2600.2180 version (the broken one is 5.1.2600.5512 - but I guess this makes not much of a difference, since it's an icon-only DLL, and an outdated copy should work better than one that can't be read) Run chkdsk /r /f` which will "repair" the file (i.e. delete the file without asking, tell the drive to remap the sector, and toss some unreadable junk into a file with a hexadecimal number) Hopefully Windows still boots after this (is that a reasonable expectation, or do I need to have something like BartPE ready? -- but then again, what's that good for in case chkdsk has nuked the entire file system...) Delete the junk file generated by chkdsk, copy the new DLL to %windir%\system32 Reboot. Pray. Maybe I just shouldn't touch anything, since it still kind of works... if annoying, but it works. Unsure... But, is there anything fundamentally wrong with the planned approach? Is this a sensible approach at all?

    Read the article

  • nginx rewrite for wikkawiki

    - by Hans
    Just setup WikkaWiki on my server, I have been trying to have the links go from wiki.mysite.info/wikka.php?wakka=Start into wiki.mysite.info/DotMG. I tried following their guide at http://docs.wikkawiki.org/ModRewrite, however it seems incomplete and outdated. Furthermore, as of version 1.3.2 base_url isn't even manually configurable from the wikka.config.php file. I am using version 1.3.2 of WikkaWiki. My nginx virtual hosts file contains: server { listen 80; server_name wiki.mysite.info; root /usr/share/nginx/wikka/; access_log /usr/share/nginx/.access/wikka; error_log /usr/share/nginx/.error/wikka error; location / { index index.php; try_files $uri $uri/ @wikka; } location @wikka { rewrite ^(.*/[^\./]*[^/])$ $1/ last; rewrite ^(.*)$ /wikka.php?wakka=$1 last; } location ~* \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } Thus far it works, I can go to wiki.mysite.info/APage and it'll display that page, however it doesn't work on all pages, sometime the browser simply downloads the page (For some reason it always downloads the Start page). Also when I go to wiki.mysite.info/ it downloads the wikka.php file... Furthermore, the links on the wiki have the wikka.php?wakka= so whenever I navigate around the wiki, it goes back to being wiki.mysite.info/wikka.php?wakka=APage. I think something is wrong with my rewrite but I can't say for sure. Contents of the fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_FILENAME $request_filename; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param HTTPS $server_https; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200;

    Read the article

  • FTP script needs blank line

    - by Ones and Zeroes
    I am trying to determine the reason why some FTP servers require a blank line in the script, as follows (note the blank line right after the username credentials):

        open server.com
        username

        ftp_commands
        bye

    Example from: FTP from batch file; another reference to the same: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.as400.misc/2008-05/msg00227.html. Also discussed here: archive.midrange.com/midrange-l/200601/msg00048.html ("The behavior I'm observing is the same as if I didn't specify the password to login.") with an answer referring to our same fix: archive.midrange.com/midrange-l/200601/msg00053.html and archive.midrange.com/midrange-l/200601/msg00065.html. Note: it is my experience that FTP questions attract uncouth responses. Admittedly FTP is outdated, but many clients still have legacy systems which they cannot upgrade or replace; the reasons for that should not be discussed here. The intention of this question is to invite a positive response. Please do not respond if you disagree with the above. If you have never encountered this same issue, please do not respond. I suspect this may be limited to FTP scripts executed from Windows machines, but I have been told that this happens often and with many different servers. My specific interest is to understand what may cause this, as I have a real-world example of a production system suddenly requiring this as a workaround fix after running for many years without issue. The server belongs to a third party who claims no change on their end. Server details are unknown and cannot be determined. Any help or encouragement from someone who has come across the same would be appreciated. PS: Sorry for the many words and references to painful responses, but I have asked similar questions on Server Fault and elsewhere and unfortunately got back knee-jerk responses about FTP and respondents debating the validity of the question. I would truly not ask, or re-post, this question online if I had a better understanding of the issue. I know of people who have seen this issue but don't know what causes it. I am wary that this question would again turn into another irrelevant discussion. Please, I ask very nicely: please do not respond if you have not encountered a similar issue. FURTHER EDIT: Please do not suggest changing the product. The problem is not the blank line requirement; we know this fixes the issue. The problem is not being able to explain the reason for the blank line in the first place. A slight difference, but a critical point to note with regard to answering this question.
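    For what it's worth, a typical Windows wrapper for this kind of job looks something like the sketch below (host, credentials and file names are placeholders, not details from the affected system); the echo. line is what produces the blank line in question, and it presumably gets consumed by whatever extra prompt the server happens to raise during login:

        rem build the FTP script on the fly
        echo open server.com> job.ftp
        echo username>> job.ftp
        echo.>> job.ftp
        echo put report.txt>> job.ftp
        echo bye>> job.ftp

        rem -i disables per-file prompting, -s: feeds the script to ftp.exe
        ftp -i -s:job.ftp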

    Read the article

  • Ask How-To Geek: Learning the Office Ribbon, Booting to USB with an Old BIOS, and Snapping Windows

    - by Jason Fitzpatrick
    You’ve got questions and we’ve got answers. Today we highlight how to master the new Office interface, USB boot a computer with an outdated BIOS, and snap windows to preset locations. Learning the New Office Ribbon Dear How-To Geek, I feel silly asking this (in light of how long the new Office interface has been out), but my company finally got around to upgrading from Windows XP and Office 2000, so the new interface is totally new to me. Can you recommend any resources for quickly learning the Office ribbon and the new changes? I feel completely lost after two decades of the old Office interface. Help! Sincerely, Where the Hell is Everything? Dear Where the Hell, We think most people were with you at some point in the last few years. “Where the hell is…” could possibly be the slogan for the new ribbon interface. You could browse through some of the dry tutorials online or even get a weighty book on the topic, but the best way to learn something new is to get hands-on. Ribbon Hero turns learning the new Office features and ribbon layout into a game. It’s no vigorous round of Team Fortress, mind you, but it’s significantly more fun than reading a training document. Check out how to install and configure Ribbon Hero here. You’ll be teaching your coworkers new tricks in no time. Boot via USB with an Old BIOS Dear How-To Geek, I’m trying to repurpose some old computers by updating them with lightweight Linux distros, but the BIOS on most of the machines is ancient and creaky. How ancient? It doesn’t even support booting from a USB device! I have a large flash drive that I’ve turned into a master installation tool for jobs like this but I can’t use it. The computers in question have USB ports; they just aren’t recognized during the boot process. What can I do? USB Bootin’ in Boise Dear USB Bootin’, It’s great you’re working to breathe life into old hardware! You’ve run into one of the limitations of older BIOSes: USB was around, but nobody was thinking about booting off of it. Fortunately, if you have a computer old enough to have that kind of BIOS, it’s likely to also have a floppy drive or a CDROM drive. While you could make a bootable CDROM for your application, we understand that you want to keep using the master USB installer you’ve made. In light of that we recommend PLoP Boot Manager. Think of it like a boot manager for your boot manager. Using it you can create a bootable floppy or CDROM that will enable USB booting of your master USB drive. Make a CD and a floppy version and you’ll have everything in your toolkit you need for future computer refurbishing projects. Read up on creating bootable media with PLoP Boot Manager here. Snapping Windows to Preset Coordinates Dear How-To Geek, Once upon a time I had a company laptop that came with a little utility that snapped windows to preset areas of the screen. This was long before the snap-to-side features in Windows 7. You could essentially configure your screen into a grid pattern of your choosing and then windows would neatly snap into those grids. I have no idea what it was called or if it was any more than a gimmick from the computer manufacturer, but I’d really like to have it on my new computer! Bend and Snap in San Francisco Dear Bend and Snap, If we had to guess, we’d guess your company must have had a set of laptops from Acer, as the program you’re describing sounds exactly like Acer GridVista. Fortunately for you, the application was extremely popular and Acer released it independently of their hardware. 
If, by chance, you’ve since upgraded to a multiple monitor setup, the app even supports multiple monitors; many of the configurations are handy for arranging IM windows and other auxiliary communication tools. Check out our guide to installing and configuring Acer GridVista here for more information. Have a question you want to put before the How-To Geek staff? Shoot us an email at [email protected] and then keep an eye out for a solution in the Ask How-To Geek column.

    Read the article

  • How to find domain registrar and DNS hosting with good DNSSEC support?

    - by rsp
    Simplified problem: I want to buy a domain and make a website that is fully secured with DNSSEC. Background: I've been hearing about the insecurity of DNS for years. I've watched all of the talks by Dan Kaminsky and others, from DNS exploits to The Future of DNS Security Panel. I knew that using DNS without security is a disaster waiting to happen. I followed the development of the DNSSEC standard. I celebrated the key signing ceremony. Everything was on the right track to finally have a secure DNS system in place. And now, more than 2 years later, I wanted to just do what everyone said I should do: use DNSSEC for a new domain. So I need a domain registrar and a DNS hosting service that support DNSSEC. Surprisingly, it is not that easy to even find out who supports DNSSEC. It was actually much easier to find info on DNSSEC two years ago, when everyone was going to support DNSSEC Real Soon Now, but years have passed and I hardly see any progress. I just hope that I was looking in the wrong places and someone here will clear up all of the doubts. I hope that other people who want to have a secure website will also find this question useful. What is needed: a registrar and DNS servers with full DNSSEC support for .com domains. What is not needed: IPv6 support, web hosting, anything more. What I found out so far: Go Daddy offers a Premium DNS service for an additional $36 per year that lets you "Secure up to 5 domains with DNSSEC". easyDNS has DNSSEC available in beta across all service levels (you need to enable the "beta" flag in the configuration), but it doesn't seem to be production ready, and judging from the lack of updates it isn't a feature of the highest priority (the last update on the easyDNS blog is from March 2011). Name.com - according to The Register (US domain registrar does IPv6, DNSSEC) it has had DNSSEC support since 2010, but right now (October 2012) I couldn't find anything related to DNSSEC on their website. Dynadot, which is very often recommended, doesn't support DNSSEC. Namecheap, which is also often recommended, doesn't support DNSSEC either; a support answer from 2011 suggested that it was being added, but in 2012 still no ETA is given to customers. DynDNS was supposed to support DNSSEC; I found a link explaining DNSSEC support but it gives a 404 Not Found page and offers a search box - when searching for DNSSEC I get "No results were found for your query." GKG was recommended online for DNSSEC support, but it's hard to find any information on the level of that support - there is a brief explanation of what DNSSEC is and how to sign Delegation Signer records in their FAQ, but no information about the level of actual support can be found. Ask Slashdot: Which Registrars Support DNSSEC? from July 2011 - answers list Go Daddy, DynDNS, GKG and Name.com as registrars that support DNSSEC, but: see above. Related questions: (1) How to find web hosting that meets my requirements? (2) What is needed to add DNSSEC to my site? (3) DNS hosting better managed by Domain provider or Hosting provider? (4) Registrar with good security, DNS hosting, and DNSSEC and IPv6 resolvers? In no. 1 no one mentions DNS at all. In no. 2 the answers only mention the .se TLD; there are very few answers and they seem very outdated. In no. 3 one answer says "On projects that demand higher security, I might look for a web host that supports DNSSEC" but no more information is provided. The only relevant answers are in no. 4, where easyDNS is recommended by someone who has never used them personally. 
Meanwhile, as of October 2012, the support of DNSSEC is described as "in beta" on the easyDNS feature list. Another answer recommends SiteGround, but searching their site for DNSSEC returns no results. Other answers recommend web hosting providers that don't meet the requirement of DNSSEC support. Also, the question mentioned above lists 9 very specific requirements other than DNSSEC (e.g. HTTP-only login cookies, two-factor authentication, no DNS record limits, DNS statistics of queries/day, audit trails, etc.), which might have excluded many possible recommendations if one is only interested in DNSSEC support. Conclusions: I thought that by the end of 2012 the support of DNSSEC among domain registrars and DNS providers would be nearly universal. I am shocked that the support seems virtually nonexistent. Is this a result of some serious problems with DNSSEC adoption? Or is it just not a hot topic and no one bothers anymore? According to the DNSSEC Scoreboard, roughly 0.1% of .com domains support DNSSEC. Could that be caused by the lack of DNSSEC support among registrars and DNS providers, is the information too hard to find, or does no one care? There is not even a "dnssec" tag here. Questions: The information is surprisingly hard to find. That is why I am asking for first-hand experience and personal recommendations. Has anyone here actually set up a website with DNSSEC, from the domain registration to the configuration of DNS servers? Can anyone recommend any of the registrars mentioned above? Can anyone recommend any registrar not mentioned above?
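As an aside, once a registrar and a DNS host are chosen, the resulting chain can be sanity-checked from any machine with dig; the commands below are a generic sketch (example.com is a placeholder, not a domain from the question):

    # the DS record that the registrar must publish in the parent (.com) zone
    dig DS example.com +short

    # the zone's own keys, served by the DNS host
    dig DNSKEY example.com +short

    # query with DNSSEC records included; if the resolver you use validates,
    # the answer's header carries the "ad" (authenticated data) flag
    dig A example.com +dnssec

In practice, a registrar "supports DNSSEC" exactly when it lets you publish that DS record for your domain, which is a quick way to cut through vague marketing pages.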

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with OpenSolaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11 when it became available. I am not a big fan of upgrades, as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if only I had read the documentation a bit better in advance. For a number of reasons I prefer a dual boot system. The most important one is that, especially with mobile devices, you often run into network problems, and you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant, and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old openSUSE installation that had not been booted for a while. I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris into the MBR and onto the root partition. But then, how to enable dual boot with the two OSes? Searching the web, one mainly finds information about dual booting Linux and Linux, or Linux and Windows. I do not want to explain all the wrong configurations I worked through; I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OSes, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still, there were some hurdles to jump over: Ubuntu did not like having its boot blocks placed on the partition instead of the disk; I must admit that I do not fully understand why, but using the --force option you can get that done. Ubuntu needs an active partition; that was easy to achieve. grub2 uses a different numbering scheme for the partitions; that is in the docs, if you read them. BTW: the usual disclaimer applies. There is no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this. So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubuntu, things initially were a bit more complicated, as I did not know how to boot it. And the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended; at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using:

        grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3

    So, I now had a system with the Solaris boot loader in the MBR, Solaris-specific boot blocks on the Solaris root partition and Ubuntu-specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done. 
Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!):

    title Ubuntu 11.10
    root (hd0,2)
    makeactive
    chainloader +1
    boot

The Ubuntu root file system sits on the third partition (/dev/sda3). Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

    menuentry "Solaris 11/11" {
        set root=(hd0,1)
        chainloader +1
    }

Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool I found on sourceforge.net that was able to produce a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut
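One detail the write-up does not mention, so treat it as an assumption about this setup rather than part of it: grub2 only reads /etc/grub.d/40_custom when the configuration is regenerated, so after editing that file the new menu entry has to be compiled into /boot/grub/grub.cfg. A minimal sketch on Ubuntu:

    # regenerate /boot/grub/grub.cfg so the 40_custom entry is picked up
    sudo update-grub

    # optional sanity check: the entry should now appear in the generated file
    grep -A 3 'Solaris 11/11' /boot/grub/grub.cfg

If the author already ran this, it simply confirms the entry; if not, it is the missing step between editing 40_custom and seeing the Solaris entry in Ubuntu's boot menu.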

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21  | Next Page >