Search Results

Search found 18096 results on 724 pages for 'let me be'.

  • "Reboot and select proper boot device"?

    - by overtherainbow
    Hello. I didn't find the answer on Clonezilla's site or in its mailing-list archives. Maybe someone has already seen this issue and knows how to recover from it. On a test host, using www.partedmagic.com, I created two partitions: one to hold an OS I wish to use for testing (/sda1), and a second partition to hold images (/sda2). After trying out Windows 7, I used Clonezilla to restore an XP SP3 image, but I get the following error message when rebooting: "Reboot and select proper boot device". Could it be that Clonezilla didn't save/restore the MBR? GParted didn't let me set a partition as "active", so it could also be that, but I have no idea. Thank you for any help.
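    For what it's worth, a hedged sketch of the usual recovery from a PartedMagic shell, assuming the disk is /dev/sda, XP sits on partition 1, and the Clonezilla image directory contains a saved copy of the MBR (often a file named like sda-mbr; names vary by version):

        # set the boot/active flag on the XP partition
        parted /dev/sda set 1 boot on

        # restore the saved MBR boot code; bs=446 copies only the boot loader
        # bytes and leaves the partition table untouched
        dd if=/path/to/image/sda-mbr of=/dev/sda bs=446 count=1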

  • Web API, JavaScript, Chrome &amp; Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API. We thought it was resolved on day 2 but it resurfaced the next day. We definitely resolved it today, though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We referenced many sources during our trial-and-error troubleshooting. These are the links we referenced, in order of applicability to the solution:

    - Zoiner Tejada, JavaScript and other material: http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    - WebDAV, where I learned about "Accept": http://www-jo.se/f.pfleger/cors-and-iis?
    - IT Hit, tells about NOT using '*': http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    - Carlos Figueira, sample back-end code (newer): http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version: http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a)

    Background

    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed, by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests.

    Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive: IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox), and the Chrome developer console shows an XmlHttpRequest (XHR) error. [Screen captures from IE and Chrome omitted; Chrome displays no data and its console shows an XHR error.]

    Why does this happen?

    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin", or Cross-Origin Resource Sharing (CORS), request. So basically, the sequence is, if I understand correctly:

    1) Page loads
    2) JavaScript request processed by browser
    3) Browser prepares to submit request
    4) [Chrome] Browser submits pre-flight request
    5) Server responds with HTTP 200
    6) Browser submits request
    7) Server responds with data
    8) Page shows data

    This situation occurs for both GET and POST methods. Typically, GET methods are called with query-string parameters, so there is no data posted; instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs, on the other hand, send form data, so more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it

    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and I list the references above. There are basically three (3) tasks needed to make this work. Assumptions: you are using Visual Studio, Web API, and JavaScript, you need Cross-Origin Resource Sharing, and you target several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET.

    The POST CORS JavaScript function (credit to Zoiner Tejada):

        function (url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada):

        function (url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function (jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    Now you need to call these functions to get and post your data (instead of, say, using $.ajax). Here is a GET example:

        corsAjax.get(url, function (data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple... except... you're not done yet.

    2. Change Web API Controllers to Allow CORS

    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header, Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated. NOTE: you'll see a lot of references to using "*" as the header value; for security reasons, Chrome does NOT recognize this as valid.

        [HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")]
        [HttpHeader("Access-Control-Allow-Credentials", "true")]
        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")]
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")]
        [HttpHeader("Access-Control-Max-Age", "3600")]
        public abstract class BaseApiController : ApiController
        {
            [HttpGet]
            [HttpOptions]
            public IEnumerable<foo> GetFooItems(int id)
            {
                return foo.AsEnumerable();
            }

            [HttpPost]
            [HttpOptions]
            public void UpdateFooItem(FooItem fooItem)
            {
                // NOTE: The fooItem object may or may not (probably NOT)
                // be set with actual data. If not, you need to extract
                // the data from the posted form manually.
                if (fooItem.Id == 0) // However you check for default...
                {
                    // We use Newtonsoft.Json.
                    string jsonString = context.Request.Form.GetValues(0)[0].ToString();
                    Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();
                    fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));
                }
                // Update the fooItem object.
            }
        }

    Please note a few specific additions here:

    - The header attributes at the class level are required. Not all of those methods and headers may need to be specified, but we find it works this way, so we aren't touching it.
    - Web API will actually deserialize the posted data into the object parameter of the called method on occasion, but so far we don't know why it sometimes does and sometimes doesn't.
    - [HttpOptions] is, again, required for the pre-flight check.
    - The "Access-Control-Allow-Origin" response header should NOT NOT NOT contain an '*'.

    3. Headers and Methods and Such

    We had most of this code in place but found that Chrome and Firefox still did not render the data. Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data was returned properly. We learned that among the methods set at the class level, we needed to add "ACCEPT". Note that I accidentally added it to both methods and headers; adding it to methods worked, but I don't know why. We added it to headers also for good measure.

        [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA...
        [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin...

    Next Steps

    That should do it. If it doesn't, let us know. What to do next? Don't hardcode the allowed domains. Note that port numbers and other domain-name specifics will cause problems and must be specified. If this changes, do you really want to deploy updated software? Consider Carlos Figueira's approach in the following link to writing a custom HttpHeaderAttribute class that lets you specify the domain names dynamically. There are, of course, other ways to do it dynamically, but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

  • Documenting and enforcing programming standards and guidelines for shared library

    - by dreza
    Another developer and I, with the go-ahead from our IT director, have started a general-purpose library in .NET, with the intention that it will provide many common-purpose classes that we use in our day-to-day development. During discussion and design of the library we have come up with a set of standards that we want the library to follow, to ensure it is maintained and expanded in a consistent manner. What is the best way to ensure these decisions get fed to the other developers who might be using and adding to this library in the future? One of our decisions was to review all checked-in code, so we expect initially there will be some differences between individual coding styles and the project standards. Some ideas I had were:

    - Add a Read-me.txt to the project that outlines the guidelines and standards
    - Send an email out to everyone in the team to let them know about the project
    - Call a team meeting to go through this new project and the expectations and standards we are aiming to follow
    - Try to enforce the standards via Visual Studio (not sure if this is possible or how; just an idea)

    At the moment there are no general company programming standards, so this would be a first, really, insofar as we are creating a standard that different project teams would need to adhere to.

  • How will technological singularity affect programmers?

    - by Amir Rezaei
    I'm one of the believers who think we will hit the technological singularity sooner or later. The question, then, is whether any profession will be unaffected by the changes that will come. In the end it will be we programmers who implement the first self-aware AI. How will the technological singularity affect us programmers? What is your professional opinion regarding the technological singularity? EDIT: By self-aware I refer to an entity that questions and seeks answers, and is able to analyze and solve problems. Artificial neural networks are a branch of mathematics/statistics with many widely used algorithms. The algorithms are applied wherever recognition of data is needed; for example, hidden Markov models are used for voice recognition. Another well-known area is business intelligence and data mining. Today's algorithms are self-learning. That is a bit of AI that many never think of.

    "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

    Link to Ref.

  • Computer and internet use on a boat?

    - by dlamblin
    Let's assume that I seriously would like to be able to use my computer as easily and as carefree on a boat as I do at home. In short, if I were on a boat for extended periods, such as two weeks at a time and occasionally a few days out of port (this would be a two-person sailing vessel), what would my options be for power, durability, and of course internet access? I want more than just email, as I will likely keep doing a tad of development, and will be looking at Google Earth from time to time. I am already assuming that we're going with some kind of laptop, maybe even a MacBook Pro 12". I personally feel that netbooks are underpowered. And the last computer I had that earned a boat-friendly rating was a ZX Spectrum, which was before the internet, and that was likely due to its 12 VDC happiness.

  • NFS Client reports Permission Denied, Server reports Permission Granted

    - by VxJasonxV
    I have two RedHat 4 servers. The client is 4.6, the server is 4.5. I'm attempting to mount a share from the server onto the client via NFS. The /etc/exports configuration is as follows:

        /opt/data/config bkup(rw,no_root_squash,async)
        /opt/data/db     bkup(rw,no_root_squash,async)

    exportfs returns these (among other) shares, and nfs is running according to ps output. I've been attempting to use autofs on the client, but have opted to just mount the share manually considering the issues I'm having. So, I issue the mount request:

        mount dist:/opt/data/config /mnt/config
        mount: dist:/opt/data/config failed, reason given by server: Permission denied

    Ok, so let's see what the server has to say for itself:

        May  6 23:17:55 dist mountd[3782]: authenticated mount request from bkup:662 for /opt/data/config (/opt/data/config)

    It says it allowed the mount to take place. How can I diagnose why the client and server are disagreeing on the result?
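    A hedged diagnostic aside, since the two logs disagree: mountd decides access by reverse-resolving the client's source IP to a hostname and matching it against "bkup" in /etc/exports, so a mismatch there (or a stale export list) produces exactly this he-said-she-said. Commands worth comparing on both ends:

        # on the server: what is exported, and with which effective options
        exportfs -v
        showmount -e localhost

        # on the server: re-export after any change to /etc/exports
        exportfs -ra

        # on the client: the export list as the server presents it to this client
        showmount -e dist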

  • Windows 2003 SBS: no more CALs sold

    - by Gregory MOUSSAT
    I just discovered a hidden, unmanaged server in a remote location. It is a Windows 2003 SBS with 5 per-device CALs, and there are currently 12 computers connecting. So I want to buy more CALs. But SBS 2003 CALs are not sold anymore. Neither are SBS 2008 CALs, which could be downgraded to 2003. And 2011 CALs can't be downgraded. So there is no legal solution if we want to stay on 2003: a sort of planned obsolescence. We could upgrade the server to 2011, but I'd like to leave it as is (I don't "repair" working servers; that often leads to bigger problems, especially on these unmanaged servers). Does anyone see another solution?

  • How to make a PXE-bootable 10 MB or larger DOS image?

    - by rjt
    Would like to make a "massive" DOS floppy disk image, say 10MegaBytes or more containing all the firmware updates i need for any system, harddrive, BIOS. i do not need the DOS image to be networkable as everything will be on the PXE booted image, but networking would be nice. Since ZipDisks were attached to the floppy disk controller and were over 100MegaBytes, this should be possible. i tried a long time ago to do this and spent too much time on it only to have it fail to boot. So if someone has reliable instructions on how to create such a nightmarish beast and edit it, please let me know. One image that can used for PXE and copied to a USB stick would be a plus. Too bad manufacturers don't supply a single bootable Linux ISO containing all their firmware updates, that would be easy to boot over-the-lan and have networking. HP servers do this and it is awesome.

  • SkyDrive and Consumer Cloud Services

    - by Tim Murphy
    Paul Thurrott recently posted an article on the future of SkyDrive, and @UserCommunity asked what I thought about that future. So let's take a look. The breakdown from Microsoft that Paul described is, I believe, an accurate representation of users and usages. While I can't say that I leverage SkyDrive to the extent it was meant to be used, I do enjoy having OneNote hosted there and being able to consult and edit it from the desktop, the web, and Windows Phone. Taking that one step further, the Midwest Geeks group, which started as the community of Microsoft-related user groups in our region, uses SkyDrive groups and shares calendars and documents. This collaboration aspect isn't new in itself, but having it connected with the rest of your cloud assets makes life easier. Another recent usage of this type of cloud service is storing your personal music files in order to get that same universal access. This is a scenario with arguments for and against: on the one hand, own once and listen anywhere is great; on the other hand, the bandwidth cost becomes a giant downside. This is especially the case since most carriers are now doing away with unlimited data packages. Ultimately I see this type of resource growing and evolving at a phenomenal rate over the next few years as we continue to become more mobile. Having multiple players such as SkyDrive and iCloud will only help to give us more options. Only time will tell where we end up next.

  • Using JavaScript for basic HTML layout

    - by Slobaum
    I've been doing HTML layout as well as programming for many years, and I'm seeing a growing issue recently. Folks who primarily do HTML layout are becoming increasingly comfortable using JavaScript to solve basic page-layout problems. Rather than consider what HTML is capable of doing (to hit their target browsers), they're slapping on bloated JS frameworks that "fix" fairly basic problems. Let's get this out of the way right here: I find this practice annoying and often inconsiderate of those with special accessibility needs. Unfortunately, when you try to tell these folks that what they're doing isn't semantic, ideal, or possibly even a good idea, they always counter with the same old arguments: "JavaScript has a market saturation of 98%, we don't care about the other 2%," or "Who doesn't have JavaScript enabled these days?" or simply "We don't care about those users." I find that remarkably short-sighted. I would really like the opinion of the community at large. What do you think: am I holding too fast to a dying ideal? I am willing to accept that, but would like to be given good arguments as to why I should disregard 2%+ of my user base (among others). Is JavaScript's prevalence a good excuse to use a programmatic language to do basic layout, thus mucking up your behavior and layout? jQuery and similar "behavior"-based frameworks are blurring the lines, especially for those who don't realize the difference. Honestly (and probably most importantly), I would like some "argument ammo" to use against these folks when the "it's the right way to do it" argument is unacceptable. Can you cite sources outlining your stance, please? Thanks everybody, and please be civil :)

  • release management system - architectural question

    - by Sonic Soul
    Every place I've worked created its own release process, and all of them worked pretty well, but it took a good deal of effort (and often a dedicated team) to manage releases. I am currently at a new place and about to design such a system, but this time the team is very lean and we won't have dedicated resources for releasing. It will be up to the development manager until the system is proven enough for other developers to use. We're using Subversion as the code repository, TeamCity as the build server, Jira for issue tracking, and an Oracle database. I was thinking about writing a basic workflow app that will let developers create a new manifest specifying the following items:

    - release details (who, Jira issues, etc.)
    - workflow step (dev, test, uat, prod approved, prod released)
    - source files

    That last item is where it can get hairy, especially with database scripts. I figured I'd ask whether there is a good pattern, or an off-the-shelf product, that could help with the database part, or perhaps the whole process. I briefly tested the Red Gate Oracle deployment tool, but it didn't work out as well as I had hoped (from one day of testing, at least). Questions:

    1. I think I could get around releasing our code with something like Octopus Deploy straight from TeamCity. I am not clear, however, how I could create a simple database deployment part that will track which version of which script (from Subversion) has been deployed where.
    2. Is there already some utility I could use for navigating Subversion to choose which scripts should be released, instead of writing one from scratch? I'd just need it to produce a manifest of paths + versions.
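    For the manifest itself, a purely hypothetical sketch of the shape such a record might take (none of these field names come from any product):

        release:
          id: 2013-02-R1
          author: jdoe
          jira: [PROJ-101, PROJ-117]
          stage: uat            # dev -> test -> uat -> prod approved -> prod released
          scripts:
            - path: db/migrations/017_add_index.sql
              svn_revision: 4812
            - path: db/migrations/018_backfill_totals.sql
              svn_revision: 4820

    Recording the Subversion revision next to each script path is what lets you answer "which version of which script has been deployed where" without a separate tool.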

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure, and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words: assuming I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ballpark estimate.
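    As a rough aid to the arithmetic: two levels of single-character directories drawn from [a-z0-9] give 36 x 36 = 1,296 leaf directories, so three million files work out to roughly 2,300 per directory on average, which is comfortable territory for ext3 (especially with dir_index enabled). A minimal bash sketch of the placement logic, assuming file names are at least two characters and reasonably uniform in their leading characters (hashing the name first would smooth out skew):

        # place abc.ext at ./a/b/abc.ext
        f="abc.ext"
        d="${f:0:1}/${f:1:1}"
        mkdir -p "$d" && mv "$f" "$d/$f"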

  • How to change font/number/bullet format for the whole list

    - by Nam Gi VU
    Hi, let's look at a sample Word 2010 file here. In this file you can see that the first group of text has its indent modified, while the 2nd and 3rd groups keep their default format. I want to apply the changes I've made to the 1st group to the 2nd and 3rd groups, and to any new group I add, instead of receiving the default indent. How can I do that? P.S. Currently what I have to do is quite tedious: copy group 1 and paste it to create a new group that has the formatting, then remove the old text and add the new text :(

  • Windows 7 does not recognise second display output [closed]

    - by gilles27
    I've got a PC with dual BenQ G2222HDL monitors and an ATI Radeon HD 4650 video card. I've been running both monitors at 1920x1080 for some months now, but last week the second monitor switched to a lower resolution and won't let me go back to 1920x1080. If I right-click the desktop and choose Screen Resolution from the menu, I get two items in the "Display:" drop-down list:

    1. BenQ G2222HDL
    2. D-SUB Display device on: VGA

    In the past, 2 was always the same as 1. If I click Detect, a third item appears, "Available display output on: ATI Radeon HD 4650", but if I select it, the "Multiple displays:" drop-down list says "No display detected" and then lets me choose either "Connect anyway on S-Video" or "Connect anyway on Component", neither of which helps. It seems like Windows 7 recognises the card is dual-head and knows I have two monitors, but can't link it all together. I have checked and all my drivers are up to date. Does anyone have any suggestions as to how I can get the second monitor working properly again?

  • JavaScript objects and Crockford's The Good Parts

    - by Jonathan
    I've been thinking quite a bit recently about how to do OOP in JS, especially when it comes to encapsulation and inheritance. According to Crockford, the classical pattern is harmful because of new(), and both the prototypal and classical patterns are limited because their use of constructor.prototype means you can't use closures for encapsulation. Recently, I've considered the following couple of points about encapsulation:

    1. Encapsulation kills performance. It makes you add functions to EACH member object rather than to the prototype, because each object's methods have different closures (each object has different private members).
    2. Encapsulation forces the ugly "var that = this" workaround, to give private helper functions access to the instance they're attached to. Either that, or you make sure you call them with privateFunction.apply(this) every time.

    Are there workarounds for either of the two issues I mentioned? If not, do you still consider encapsulation to be worth it? Side note: the functional pattern Crockford describes doesn't even let you add public methods that touch only public members, since it completely forgoes the use of new() and constructor.prototype. Wouldn't a hybrid approach, where you use classical inheritance and new() but also call Super.apply(this, arguments) to initialize private members and privileged methods, be superior?
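    To make the first point concrete, a small illustrative sketch of the trade-off (my own example, not from Crockford):

        // Closure-based encapsulation: count is truly private, but every
        // instance carries its own copy of increment().
        function makeCounter() {
            var count = 0;
            var that = {};
            that.increment = function () {
                count += 1;
                return count;
            };
            return that;
        }

        // Prototype-based: one shared increment() for all instances, but
        // the "privacy" of _count is only a naming convention.
        function Counter() {
            this._count = 0;
        }
        Counter.prototype.increment = function () {
            this._count += 1;
            return this._count;
        };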

  • finding the user of iis apppool \ defaultapppool

    - by LosManos
    My IIS app pool user is trying to create a folder but fails. How do I find out which user it is? Let's say I don't know much about IIS 7 but need to trace whatever is happening through tools. The scene of the crime is Windows Server 2008 with IIS 7. So I fire up Sysinternals Process Monitor to find out what is happening, and I find Access Denied on a folder, just as I suspected. But which user? I add the User column to the output and it says IIS APPPOOL\DEFAULTAPPPOOL in capitals. Well... that isn't a user, is it? If I go to IIS and its app pools and Advanced Settings, under Process Model, Identity, I can see clues about which user it is, but that is only because I know IIS. What if it had been Apache or lighttpd or whatever? How do I see the user, so I can give it the appropriate rights?
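    A hedged sketch of the common case: with the ApplicationPoolIdentity setting, the worker process runs under a virtual account named after the pool, which is exactly the IIS APPPOOL\DefaultAppPool name Process Monitor displays, and you can grant it NTFS rights directly (the path here is hypothetical):

        icacls "C:\inetpub\wwwroot\MyApp\uploads" /grant "IIS AppPool\DefaultAppPool":(OI)(CI)M

    (OI)(CI)M makes the Modify grant inherit to subfolders and files.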

  • Naming conventions: camelCase versus underscore_case? What are your thoughts about it? [closed]

    - by poelinca
    I had been using underscore_case for about two years, and I recently switched to camelCase because of a new job (I've been using the latter for about two months, and I still think underscore_case is better suited to large projects with a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant. What are your thoughts about camelCase or underscore_case? P.S. Please excuse my bad English.

    Edit: Some updates first:

    - The platform used is PHP (but I'm not expecting strictly PHP-related answers; anybody can share their thoughts on which would be best to use, which is why I came here in the first place).
    - I use camelCase, just like everybody else on the team (just as most of you recommend).
    - We use Zend Framework, which also recommends camelCase.

    Some examples (related to PHP): the CodeIgniter framework recommends underscore_case, and honestly the code is easier to read; ZF recommends camelCase, and I'm not the only one who thinks ZF code is a tad harder to follow. So my question, rephrased: let's take a case where you have platform Foo, which doesn't recommend any naming convention, and it's the team leader's choice to pick one. You are that team leader: why would you pick camelCase, or why underscore_case? P.S. Thanks everybody for the prompt answers so far.

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:

    - Make a complete filesystem clone with Antonio Diaz's ddrescue.
    - Run Disk Warrior on the copy and repair whatever errors occurred.
    - Wipe out all ACLs on the entire drive.
    - Set all permissions to the same value, wide open, 777.
    - Remove any system data (applications, system files, including hidden files, to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive.
    - Transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each, to observe for issues. (Interesting here is that no issues occurred except for the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity, or a specific combination of data, that results in problems.)
    - Use Data Rescue to transfer the data to yet another newly formatted drive, to expose any missed hidden files.

    Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor, All Processes, and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3, to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11

    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command "sudo opensnoop -p PID", where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop output stops moving.

    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__

    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where PID is the process ID of the processes I was monitoring; I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support; they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated: mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files: Performer, Live, SD2, things like that), I will edit again to let you all know. Thanks for any attempts to help, M

  • Ignoring GET parameters in Varnish VCL

    - by JamesHarrison
    Okay: I've got a site set up which exposes some APIs to developers, in the format /api/item.xml?type_ids=34,35,37&region_ids=1000002,1000003&key=SOMERANDOMALPHANUM. In this URI, type_ids is always set; region_ids and key are optional. The important thing to note is that the key variable does not affect the content of the response. It is used for internal tracking of requests so we can identify people who make slow or otherwise unwanted requests. In Varnish, we have a VCL like this:

        if (req.http.host ~ "the-site-in-question.com") {
            if (req.url ~ "^/api/.+\.xml") {
                unset req.http.cookie;
            }
        }

    We just strip cookies out and let the backend do the rest as far as timing is concerned (this is a hackaround, since Rails/Authlogic sends session cookies with API responses). At present, though, distinct developers are basically hitting different caches, since &key=SOMEALPHANUM is considered part of the Varnish hash for storage. This is obviously not a great solution, and I'm trying to work out how to tell Varnish to ignore that part of the URI.
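    Not claiming this is the canonical way, but one sketch that collapses all key variants onto a single cache object is to rewrite req.url in vcl_recv, before Varnish computes its hash (the regexes assume the key value never contains an ampersand):

        sub vcl_recv {
            if (req.http.host ~ "the-site-in-question.com" && req.url ~ "^/api/.+\.xml") {
                # key= somewhere after the first parameter
                set req.url = regsuball(req.url, "&key=[^&]*", "");
                # key= as the first parameter, with or without others after it
                set req.url = regsub(req.url, "\?key=[^&]*&", "?");
                set req.url = regsub(req.url, "\?key=[^&]*$", "");
            }
        }

    Note that the backend then never sees the key on cache misses, so if the tracking matters you would log it (or copy it to a request header) before stripping.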

  • Getting GTalk and Alt-Tab to play nice

    - by Steve Armstrong
    I'm running GTalk on Windows 7, and it refuses to work properly with Alt-Tab. Let's say I've got 2 message windows open (msg1 and msg2) and the contact list, as well as Firefox. Alt-tab from Firefox to msg1 works, but now the contact list is the most recent thing in the alt-tab list (not Firefox as expected). Then there's the problem that I can't use alt-tab to select a specific message window, and when switching back to msg2 (showing msg2 in the alt-tab window) it might switch back and have focus on msg1. I found this thread complaining about the same problem, but it's over a year old, and I'm hoping some progress has been made.

  • Disable the Old Adobe Flash Plugin in Google Chrome

    - by The Geek
    If you've just updated to the Dev or Beta release of Google Chrome, you might have noticed that a special version of Adobe Flash is now integrated into the default distribution of Chrome. But what about your old plug-in? As it turns out, the old plug-in is generally still installed... but you can easily disable Chrome plug-ins in the latest version, so let's get to work.

    Disable the Extra Flash Plug-in

    Head over to about:plugins and look through the list; you should notice two Shockwave Flash plugins. The first one should be in your Google Chrome installation folder, with the filename gcswf32.dll. This is the NEW one, so don't disable it! If you keep scrolling down, you'll see the old one, with the filename NPSWF32.dll. This is the OLD plugin, and you can safely disable it. Of course, if you only use Chrome you could just completely uninstall Adobe Flash from your system by heading into Control Panel's Uninstall Programs screen, and then finding and uninstalling Adobe Flash Player Plugin (the ActiveX version is for Internet Explorer). We've not done any testing to see whether the old Flash plugin is even still active, but you may as well disable it just to be sure, right?

  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD), so I made two quick video tutorials to show how BDD can be done from the early requirement-collection stage to late integration tests. The first explains breaking user stories into behaviors, then developers and test engineers taking the behavior specs and writing a WCF service and unit tests for it in parallel, and eventually integrating the WCF service and doing the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows how you can write a test once and run it as either a unit test or an integration test at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit, Subspec and on a WCF Service. Warning: you might hear some noise in the audio in some places; something is wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and then resume play; that eliminates the noise. The next video tutorial is about doing BDD for automated UI tests. It shows how test engineers can take behaviors and write tests that exercise a prototype UI in isolation (just like a service contract) to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for expected behaviors: Doing BDD with xUnit and WatiN on an ASP.NET webform. Hope you like it!
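    In the same spirit, a minimal sketch of what a behavior-named xUnit test with a Moq fake might look like (all of the types here are hypothetical stand-ins, not from the screencasts):

        using Moq;
        using Xunit;

        public class When_a_foo_is_requested_by_id
        {
            [Fact]
            public void The_matching_foo_is_returned()
            {
                // Arrange: fake the repository the WCF service depends on.
                var repository = new Mock<IFooRepository>();
                repository.Setup(r => r.GetById(42)).Returns(new Foo { Id = 42 });

                // Act: call the service just as a client would.
                var service = new FooService(repository.Object);
                var result = service.GetFoo(42);

                // Assert the behavior, not the implementation.
                Assert.Equal(42, result.Id);
            }
        }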

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain set up already, with wildcard subdomains: username.maindomain.com and maindomain.com. I want to provide my users with additional domains they can select: additional1.com, additional2.com, additional3.com... These additional domains would also need to support wildcard subdomains (as the subdomain routes to a username). Does anyone know how to properly configure this in DNS and the VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard-subdomain A record for each as well), and in my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
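    A sketch of the usual shape this takes (all names and the address are placeholders): on the DNS side, each additional domain gets an apex A record plus a wildcard A record; on the Apache side, they all stack onto the one vhost as ServerAlias entries, and the application recovers the username from the Host header.

        ; zone file fragment for additional1.com (illustrative)
        @   IN A 203.0.113.10
        *   IN A 203.0.113.10

        <VirtualHost *:80>
            ServerName  maindomain.com
            ServerAlias *.maindomain.com
            ServerAlias additional1.com *.additional1.com
            ServerAlias additional2.com *.additional2.com
            ServerAlias additional3.com *.additional3.com
            DocumentRoot /var/www/site
        </VirtualHost>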

  • Setting up multiple wireless access points on same network

    - by SqlRyan
    I'd like to add wireless to my network, and I need multiple access points to cover the whole area. I'd like to set them up so that there's only one "wireless network" that the clients see, and so that it switches them as seamlessly as possible between access points as they wander around (if that's not possible, then at least so that they don't need to set up the security by hand on each one the first time, if possible). I've searched online, and there are quite a few sets of mixed instructions (same vs. different SSID, frequency, whether the security settings need to match exactly, etc.). Can somebody who has experience doing this please let me know what they did? I imagine it's pretty simple, but there seems to be no clear-cut "yes, you can do this" online, even though I know you can. I have a mid-size LAN with about 20 workstations and two domain controllers on it. Also, I'll be doing this with consumer wireless components, if it makes a difference, not enterprise-level components (i.e., Linksys rather than Cisco).

  • Top 10 OTN Tech Articles for 2012

    - by Bob Rhubart
    It takes a special kind of IT pro to risk additional carpal tunnel damage to pound out a technical article after spending the day wrestling with a keyboard on other duties. That kind of dedication is noteworthy, even more so if people actually take the time to read the resulting article. So if you know any of the authors listed below, skip the handshake and give them a congratulatory slap on the back for all that time spent torturing their tendons. Their hard work has earned a place on this list of the Top 10 most popular OTN articles published in 2012:

    1. Getting Started with Java SE Embedded on the Raspberry Pi, by Bill Courington and Gary Collins
    2. How Dell Migrated from SUSE Linux to Oracle Linux, by Jon Senger, Aik Zu Shyong, and Suzanne Zorn
    3. Exploring Oracle SQL Developer, by Przemyslaw Piotrowski
    4. Getting Started with Oracle Unbreakable Enterprise Kernel Release 2, by Lenz Grimmer
    5. How to Get Started (FAST!) with JavaFX 2 and Scene Builder, by Mark Heckler
    6. How to Use Oracle VM VirtualBox Templates, by Yuli Vasiliev
    7. How to Update Oracle Solaris 11 Systems From Oracle Support Repositories, by Glynn Foster
    8. Tips for Hardening an Oracle Linux Server, by Lenz Grimmer and James Morris
    9. How To Configure Browser-based SSO with Kerberos/SPNEGO and Oracle WebLogic Server, by Abhijit Patil
    10. How to Create a Local Yum Repository for Oracle Linux, by Jared Greenwald

    Of course, OTN has a great many articles covering a broad range of topics of interest to Java developers, DBAs, sysadmins, solution architects, and everybody else who works to keep the IT world running. You'll find them here. If you have suggestions for topics or technologies you'd like to see covered, please let us know. And if you have insight and expertise to share, why not write your own article? Click here to learn how to get published on OTN.
