Search Results

Search found 16329 results on 654 pages for 'b long'.


  • Need Help Debugging BSOD

    - by nmuntz
    For the last few days I've been getting a BSOD at least once a day. Apparently the crash has been caused by ntkrpamp.exe, and searching on Google has revealed that the issue seems to be related to bad RAM. I have run memtest and my RAM modules have no errors. I'm pretty sure the issue is related to drivers, but I can't figure out which one. I'm not an expert in interpreting the output of WinDbg, so any pointers would be greatly appreciated. Here is my analysis with WinDbg. It was too long to paste here. One strange thing I noticed is that my Windows XP is in Spanish (SP3), but the copyright info of ntkrpamp.exe seems to be in Portuguese. Not sure if that could be an issue; perhaps I have the wrong ntkrpamp.exe... Thanks a lot in advance.
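    In case it helps whoever lands here: a typical first pass over the dump in WinDbg looks something like the sketch below ($$ marks comments; the minidump path is a made-up example). .symfix, .reload, !analyze -v and lm are all stock WinDbg commands.

        windbg -z C:\Windows\Minidump\Mini0424-01.dmp
        .symfix      $$ point the symbol path at Microsoft's public symbol server
        .reload      $$ reload symbols for the loaded modules
        !analyze -v  $$ verbose analysis; names the likely faulting module and bugcheck
        lm t n       $$ list modules with timestamps; ancient third-party drivers stand out

    If !analyze -v keeps blaming ntkrpamp.exe itself (the multiprocessor PAE build of the XP kernel, so a localized-looking copyright string is not necessarily sinister), the real culprit is usually a driver further down the stack, and the module list is the place to hunt.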

    Read the article

  • Is there a way to get att.net email to stay connected?

    - by Clay Shannon
    My att.net account at home (wireless connection) has been bad for the last several days: I have to hit F5 quite a few times to "unfreeze" it (I can read an email or two, then it freezes, etc.). At work (company LAN) it's even worse: I can connect to the site and see that I have email, but can't open any of the emails, and the screen constantly refreshes (every couple of seconds) with a "Connecting..." message. It apparently connects and disconnects over and over again, but never stays connected long enough to actually access the email. Is there a way either to fix this, OR to forward my att.net email (from home) to my work email address (accessible via MS Outlook), OR to set it up from work using Outlook to pull in my att.net email? I have Outlook 2003 at work.

    Read the article

  • Switching mdadm to an external bitmap

    - by Oli
    I've just read this in another post about improving RAID5/6 write speeds: "After increasing stripe cache & switching to external bitmap, my speeds are 160 MB/s writes, 260 MB/s reads. :-D" I've already found out how to increase the stripe cache, and that worked pretty well, but I'd like to know more about external bitmaps. I have an incredibly fast (540 MB/s) RAID0 SSD that would do well if a bitmap does what I think it does, but I'm still very unsure. I've only known about them for as long as I've known about that post. A few questions:
    1. What is a bitmap (in terms of mdadm)?
    2. What are the advantages of an internal bitmap (over external)?
    3. What are the advantages of an external bitmap (over internal)?
    4. How do I switch between the two?
    I should add that while this is an I'm-bored-let's-break-something thread, I do value the data stored on the RAID array. If doing this is going to put data at significant risk, please let me know.
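    For question 4, the switch itself is normally a two-step mdadm operation; a sketch, assuming the array is /dev/md0 and the SSD is mounted at /mnt/ssd (both names invented here). Note that the external bitmap file has to live on a filesystem that is NOT on the array it serves.

        mdadm --grow /dev/md0 --bitmap=none                  # drop the current (internal) bitmap
        mdadm --grow /dev/md0 --bitmap=/mnt/ssd/md0.bitmap   # recreate it as an external file

    The reverse direction is the same dance, with --bitmap=internal as the second step.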

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks' notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this:
    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere. The organization has only one domain, and it did not change during the migration.
    e) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles.
    To sum up: the servers in question handle user accounts and files, nothing else (e.g., no TS, no mail, no IIS, etc.). I have two major problems I'm hoping you can help me with:
    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (and not laptop users either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can no longer access the share on OLDBOX to do a final sync. How do I get this data out of those CSC folders and onto NEWBOX?
    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?
    Things I've thought about trying:
    For problem 1:
    a) Reenable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX, and reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by reenabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually, workstation by workstation?
    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data, then reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user without doing it manually?
    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX, then reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? (One stab at the ownership problem is sketched after this post.) As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? And again, how can I reinitialize the Offline Files cache for every user afterwards, without doing it manually, workstation by workstation?
    For problem 2:
    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself - in AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.
    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.
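    Since option 1c hinges on getting into the CSC folder at all, here is one stab at the ownership problem, offered as a sketch only: the drive letters and dump path are hypothetical, run it from an elevated prompt on the machine the imaged drive is attached to, and work on the copy, not the original. takeown, icacls and robocopy all ship with Vista-era Windows.

        rem take ownership of the whole CSC tree, grant full control, then copy it out
        takeown /f E:\Windows\CSC /r /d y
        icacls E:\Windows\CSC /grant Administrators:F /t
        robocopy E:\Windows\CSC D:\csc-dump /e /b /r:1 /w:1

    No promises this beats Vista's CSC protection on every file, but takeown/icacls plus robocopy's backup mode (/b) are the Vista-era equivalents of the XP-era tricks the KB articles describe.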

    Read the article

  • How to fix this navigation issue on my site?

    - by David
    First off, I use webs.com for the creation of my site. I have a very basic layout: a list of links on the left, content on the right, and a heading up top. Every link in that list is an article that I wrote; I have about 25 links going down the left-hand side of my site. The problem is that when I try out new themes that support horizontal navigation as opposed to vertical navigation, I get either a messy overflow of links or a link called "more" which lists the rest of the articles in a drop-down list across my site. What I wish I had was simple horizontal navigation like "home, about, articles", where clicking on "articles" brings the user to a page containing all my articles, preferably in a table-like display so it's not one long list. Any ideas on how I can fix this issue I'm having? Please let me know if you need more information.

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) caused by an external source, like the database server being down or a non-existent stored procedure. I thought I might write this up in case it helps someone. To reproduce the issue, I just rename the database to something different. The orchestration was failing at the point where I make a SQL request via a Solicit-Response port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port". After scratching my head for a while (as a newbie to BTS 2004) looking for a way to catch exceptions from the SQLAdapter in an orchestration, here is the solution I arrived at:
    • Put the Send and Receive shapes inside a Scope shape
    • Set the Scope's transaction type to "Long Running"
    • Add a Catch block expecting type "System.Exception"
    • Set the "Delivery Notification" of the associated Port to "Transmitted"
    • Change the "Retry Count" of the associated port to 0 (this makes sure BizTalk raises an exception, instead of a warning, so you can capture it)
    • Now capture and do whatever you need with the exception inside the Catch block

    Read the article

  • Weird Windows quirk when renaming a file

    - by Roy Rico
    Hi, when I try to rename a file in Windows 7, I typically select the file I want to rename, hit F2 (rename), and type the new name. I've been getting this weird bug where the whole file name gets selected again, sometimes right in the middle of my typing. I can sit there and watch it reselect the entire filename every second or so, as if someone were invisibly hitting Ctrl+A while I type. I have made sure that none of my keys are stuck, and have even gotten a new keyboard, but the problem still persists. I know it's probably a long shot, but I'm hoping someone has seen this issue and has a resolution. UPDATE: This is on a desktop machine, not a laptop, so there's no issue with the trackpad.

    Read the article

  • Graduating soon with a computer science degree, but have unique circumstances [closed]

    - by Donnie
    I joined the Navy in 1998 and was admitted into Nuclear Power Training. I got my electrician's mate certificate, but was put on medical hold during Nuclear Power Training. I was sent to the Naval Hospital and received a medical (honorable) discharge in the middle of 2000. I decided to stay at home and raise my son while my girlfriend worked. A few years ago, I decided that I wanted to work as a programmer, so I went to college and will soon be graduating with a degree in computer science. I hope to finish with a relatively high GPA, 3.8 or 3.9. My question is this: How much, if any, of my Navy experience should I put on my resume? And how do I explain my nine-year gap as a stay-at-home dad? Do I even try to explain it? I know recent college graduates typically have no experience, but obviously I'm not the typical college graduate. Will my long absence from working, or my relatively short stint in the Navy, hurt my chances? Should I just put the college on my resume and hope that HR thinks I'm younger than I am? Obviously my age would show at the interview, and there would be questions. Any help is appreciated.

    Read the article

  • ERP/CRM systems: desktop-based or web-based? [closed]

    - by Parhs
    I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser. My first experience was with a web-based ERP, when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The old machine, from 1993, had worked until that very day and still had no problems. The user interface was, of course, text-based, and the speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keys to add a product to the order. Compared to that, the linked ERP's page for placing orders (click Sales Orders) seems terribly slow at adding a product: no keyboard shortcut works to save what you added, and generally I believe you need 4 times longer to place an order than with the text interface. Having to use both mouse and keyboard for this task is bad and sadistic, so how the heck can these people ever use a system like that? So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults a browser uses isn't compatible across browsers, which is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters of desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • quick check of open port

    - by shantanuo
    The following is working (I do not want to use nmap): I need to use nc (or any other command built into CentOS) in a shell script to check port 6379 on a remote server. I want the script to exit quickly if no response is received in less than 1 second, but it seems that nc will wait too long before quitting with an exit code of 1. How do I "quickly" check whether the port is listening?
    # time nc -z 1.2.3.4 1234
    real    0m21.001s
    user    0m0.000s
    sys     0m0.000s
    # echo $?
    1
    # time nc -z 1.2.3.4 6379
    Connection to 1.2.3.4 6379 port [tcp/*] succeeded!
    real    0m0.272s
    user    0m0.000s
    sys     0m0.008s
    # echo $?
    0
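    For the record, nc's own -w flag caps how long it waits; a sketch, using the same made-up IP as above:

        nc -z -w 1 1.2.3.4 1234; echo $?

    With -w 1 the connect attempt gives up after roughly a second instead of the 21 seconds shown above, and the exit code still distinguishes open (0) from closed/unreachable (1), so it drops straight into a shell script's if.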

    Read the article

  • Using HTML5 Today part 4 - What happened to XHTML?

    - by Steve Albers
    This is the fourth entry in a series of descriptions & demos from the "Using HTML5 Today" user group presentation. For practical purposes, the original XHTML standard is a historical footnote, although XHTML Transitional will probably live on forever in the default web page templates of old web page editors. The original XHTML spec was released in 2000, on the heels of the HTML 4.01 spec. The plan was to move web development away from HTML to the more formal, rigorous approach that XHTML offered, but it was built on a principle that conflicts with the history and culture of the Internet: XHTML introduced the idea of Draconian Error Handling, which essentially means that invalid XML markup on a page will cause the page to stop rendering. There is a transitional mode offered in the original XHTML spec, but the goal was to move to D.E.H. You can see the result by serving a document with the MIME type "application/xhtml+xml" - for my class example we change this setting in the web.config file:
    <staticContent>
      <remove fileExtension=".html" />
      <mimeMap fileExtension=".html" mimeType="application/xhtml+xml" />
    </staticContent>
    With the new strict syntax, a simple error - in this case a duplicate </td> tag - can cause a critical page error. While XHTML became very popular in the ensuing decade, the Strict form of XHTML never achieved widespread use. Draconian Error Handling was one of the factors that led in time to the creation of the WHATWG, or Web Hypertext Application Technology Working Group. WHATWG contributed to the eventual disbanding of the XHTML 2.0 working group and the W3C's move to embrace the HTML5 standard. For developers who long for XML markup, the W3C HTML5 standard includes an XHTML5 syntax. For a longer, more definitive look at what happened to XHTML and how HTML5 came to be, check out the Dive Into HTML5 mirror site or Bruce Lawson's "HTML5: Who, What, When, Why" talk.
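    For the curious, the XHTML5 serialization mentioned above is just an HTML5 document that is well-formed XML, carries the XHTML namespace, and is served with the XML MIME type; a minimal sketch:

        <!DOCTYPE html>
        <html xmlns="http://www.w3.org/1999/xhtml">
          <head><title>XHTML5 sample</title></head>
          <body>
            <p>Served as application/xhtml+xml, one stray unclosed tag here stops the page cold.</p>
          </body>
        </html>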

    Read the article

  • Pathfinding with MicroPather: cost calculations with sectors and portals

    - by Adan
    Hello, I'm considering using MicroPather to help me with pathfinding. I'm not using a discrete map: I'm working in 2D with sectors and portals. However, I'm wondering what the best way is to compute costs with this library in this context. Just to be clearer about the geometric shapes I'm using: sectors are basically convex polygons, and portals are segments that lie on sectors' edges. MicroPather exposes a pure virtual Graph class that you must inherit from and override 3 functions of. I understand how pathfinding works, so there's no problem overriding those functions. Right now my implementation gives me results, i.e. I'm able to find a path in my map, but I'm not sure I'm using an optimal solution. For the AdjacentCost method, I just take the distance between sectors' centers as the cost. I think a better solution would be to use the portal between the two sectors, compute its center, and then make the cost: distance(sector A center, portal center) + distance(sector B center, portal center). I'm pretty sure the approximation using just sectors' centers is enough for most cases, but maybe with thin, long sectors that are perpendicular, the approximation could mislead the A* algorithm. For the LeastCostEstimate method, I just take the distance between the two sectors' centers. So, as you can see, I'm always working with sectors' centers, and it's working fine, but I'm pretty sure there's a better way to work. Any suggestions or feedback? Thanks in advance!
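    To make the portal idea concrete, here is a C++ sketch of that AdjacentCost calculation (Vec2, Sector and Portal are stand-ins for the poster's own types, not MicroPather's API):

        #include <cmath>

        struct Vec2   { float x, y; };
        struct Portal { Vec2 start, end; };   // segment on a shared sector edge
        struct Sector { Vec2 center; };       // center of the convex polygon

        static float Distance(const Vec2& a, const Vec2& b) {
            float dx = a.x - b.x, dy = a.y - b.y;
            return std::sqrt(dx * dx + dy * dy);
        }

        // cost of moving from sector a to sector b through portal p:
        // route via the portal's midpoint instead of jumping center-to-center
        float AdjacentCostVia(const Sector& a, const Sector& b, const Portal& p) {
            Vec2 mid = { (p.start.x + p.end.x) * 0.5f, (p.start.y + p.end.y) * 0.5f };
            return Distance(a.center, mid) + Distance(mid, b.center);
        }

    This stays A*-friendly as long as LeastCostEstimate keeps returning the straight-line distance between centers: by the triangle inequality that never overestimates the via-portal route, so the heuristic remains admissible.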

    Read the article

  • Ubuntu 12.04 host lookups extremely slow

    - by tubaguy50035
    I'm having issues with one of my servers taking a long time to look up host names. This is an Ubuntu 12.04 box, so I've tried following the new resolvconf directives. In my /etc/network/interfaces file, I defined my name servers like this:
    auto eth0
    iface eth0 inet static
        address someaddress
        netmask 255.255.255.0
        gateway 198.58.103.1
        dns-nameservers 74.14.179.5 72.14.188.5
    In my /etc/resolv.conf, I see these name servers, like this:
    # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
    # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
    nameserver 74.14.179.5
    nameserver 72.14.188.5
    On another box, I edited resolv.conf directly, as directed by my host's setup help files. It looks like this:
    domain members.linode.com
    search members.linode.com
    nameserver 72.14.179.5
    nameserver 72.14.188.5
    options rotate
    This second box has no issues with hostname lookups and responds quite quickly. Could the lack of the domain and search directives make my lookups slow? By slow, I mean it's taking anywhere from 5 to 15 seconds to find the IP address of a host.
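    If the missing search domain turns out to be the suspect, the resolvconf-friendly way to restore it on 12.04 is a dns-search line next to dns-nameservers (the domain below is simply copied from the working box's resolv.conf):

        auto eth0
        iface eth0 inet static
            address someaddress
            netmask 255.255.255.0
            gateway 198.58.103.1
            dns-nameservers 74.14.179.5 72.14.188.5
            dns-search members.linode.com

    After "sudo ifdown eth0 && sudo ifup eth0" (or a reboot), the generated /etc/resolv.conf should grow the matching search line.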

    Read the article

  • Moving from a static site to a CMS with new URLs and meta-data for pages

    - by Chris J
    Hi, I am in the process of rebuilding a site from static pages to a CMS, which will be using mod_rewrite to generate new page URLs. As part of this process, our marketing people and I have decided to tidy up the descriptions, keywords and titles. E.g.: a page whose URL is currently "website-name/about_us.html", with a title of "website-name - something not quite page specific", will change to "website-name/about-us/" with the title "about us - website-name", and may have a few keywords and the description changed. Our goal with updating the metadata is to improve our page rankings and keep in line with best practices for SEO. Though our current page rankings are quite good in many respects, there is room for improvement. All of the pages will also have content changes (like rearranged heading tags, a new menu on all pages, new content in the footer, and extra pieces of dynamic content relating to other pages). For this new site I plan to use 301 redirects from all the old URLs to the new ones. My question is: what can I expect to happen to the page rankings in Google, in the short term and the long term? Will this be like kicking off a new site, which has to build up trust over time, or will the original page rankings carry over?
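    As a concrete example of those 301s, an Apache mod_rewrite sketch for the about page mentioned above (the domain is a placeholder):

        RewriteEngine On
        # permanently redirect the old static page to its new CMS URL
        RewriteRule ^about_us\.html$ http://www.example.com/about-us/ [R=301,L]

    One rule per retired URL (or a RewriteMap for bulk lookups) keeps the old pages' link equity pointed at their new homes.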

    Read the article

  • Do most front and rear USB connections deliver the same power and performance?

    - by Bratch
    I was reading this Three Monitors For Every User and there were some comments about rear USB ports being able to deliver more power than front USB ports because they are directly connected to the motherboard and closer to the power supply (by circuit board runs). Even though the front USB ports may have connectors farther from the power supply, and there are cables from the motherboard to the front ports, I think that the difference in power would be negligible (unless the case is over 5 meters long). Anyone know for sure if they are the same or different? Note that I'm not talking about an older case where the front might have been USB 1.1 and the rear USB 2.0. A modern case would have USB 2.0 on all ports. And of course using a powered hub would deliver plenty of power.

    Read the article

  • Advanced (?) Excel sorting

    - by Preston Grayskull
    First of all, I'd like to admit that I don't really know anything about Excel, but I have tried to look up a solution to this in Excel books and by Googling. Here's what I'm trying to do: I have a really long spreadsheet with 7 columns total, but only two that I'm really interested in. Here's an example CSV that is much simpler than my actual dataset, but the search/sort is analogous:
    John, Apple
    Dave, Apple
    Dave, Orange
    Steve, Apple
    Steve, Orange
    Steve, Kiwi
    Bob, Apple
    Bob, Banana
    I'm interested in extracting the entire rows (all of the columns) for the names that meet the following criteria:
    ["Apple"] OR ["Apple" and "Orange"]
    NOT ["Apple" and "Orange" and anything else]
    NOT ["Apple" and anything that isn't Orange]
    So with the above CSV, I would get the entire rows for John and Dave, but not Steve and not Bob. I started doing this manually, and will likely finish by the time this question has an answer, but I would like to know this for future reference. Thanks!
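    One way to automate this, as a sketch: assuming the names sit in column A and the fruit in column B (COUNTIFS needs Excel 2007 or later), put this in a helper column C, copy it down, and then filter the sheet for TRUE:

        =AND(COUNTIFS($A:$A,$A2,$B:$B,"Apple")>0,
             COUNTIF($A:$A,$A2)=COUNTIFS($A:$A,$A2,$B:$B,"Apple")+COUNTIFS($A:$A,$A2,$B:$B,"Orange"))

    The first condition demands at least one Apple row for the name; the second says every row for that name is either Apple or Orange. Together they admit exactly ["Apple"] and ["Apple" and "Orange"], so John and Dave come out TRUE while Steve (Kiwi) and Bob (Banana) come out FALSE.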

    Read the article

  • Browser not parsing PAC file properly?

    - by mfinni
    I have a long PAC file. The browsers (IE and Chrome) are configured to use it, and it generally does what it says on the tin. However, I have a domain that keeps going through the proxy although it should be going direct:
    // Match specific hosts and IPs entered as hosts
    if (buncha stuff
        || shExpMatch(host,"(*.newmarketinc.com)")
        || shExpMatch(host,"(newmarketinc.com)")
        || buncha stuff )
        return "DIRECT";
    Pactester shows that anything in the domain should go direct:
    h:\pacparser\pactester.exe -p h:\pacfile -u http://daas.newmarketinc.com
    DIRECT
    But we continue to pass traffic to hosts in this domain via the proxy; Wireshark and Fiddler both show this. Traffic to other sites in this stanza does properly go direct, as confirmed by Fiddler and Wireshark. How do I figure out how my browser has gotten brain damage?

    Read the article

  • Tip: Keeping the ADF Mobile PDF Guide up to date

    - by Chris Muir
    This is a little tip for customers using Oracle's ADF Mobile. If you're like me, it's possible you don't rely on the online HTML version of the Mobile Developer's Guide for ADF, but rather download a PDF version of the file to use locally (look for the "PDF" link at the top right of the guide). For me, the convenience of the PDF is that it's faster, I can search the whole document easily, I can read the document split across two pages on my home monitor, the document is still available if I lose my internet connection, and it's easy to read on my iPad (especially on long-haul flights to the US across the Pacific, where there is no internet connection!). The trigger for me to download the Oracle PDF documentation has always been a new point release of JDeveloper. However, ADF Mobile, as an extension to JDeveloper, releases on a much faster schedule, independent of JDeveloper, and this includes updates to the documentation. As such, the 11.1.2.4.0 ADF Mobile PDF guide you have locally might be out of date, and you should take the opportunity to download the latest version. This is particularly important for ADF Mobile because not only are many new features being added in each release and covered in the new documentation, but the guide is also under rapid improvement to clarify much of what has been written to date. Our documentation teams are super responsive to suggestions on how to improve the guides, and this often shows per point release. How do you tell you have the latest guide? Look at the document part number, which right now is "E24475-03". This is a unique ID per release for the document: the first part is the document number, and the part after the dash is the revision number. If the document number on the website has a higher revision number, it's time to download a new, up-to-date PDF. One last thing to share: you can follow the ADF Mobile guide's document manager, Brian Duffield, on Twitter to keep abreast of updates.

    Read the article

  • How can I change ACLs recursively using cacls.exe?

    - by maaartinus
    I want to restrict access to everything inside the work directory to me and the system only. I tried this with the following command:
    cacls.exe work /t /p 'PIXLA09\Maaartin:f' 'NT AUTHORITY\SYSTEM':f
    However, it doesn't work at all. The following command should show only the two specified users, but instead shows a very long list of permissions:
    cacls.exe work\somedirectory
    I tried to use /g instead of /p, too. Since I didn't use /e, the permissions shouldn't get edited but replaced. Any ideas what's wrong?
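    One thing worth checking, offered as a guess at the culprit: cmd.exe only treats double quotes as argument grouping, so the single-quoted ACEs above reach cacls mangled, and the :f for SYSTEM sits outside the quotes entirely. The double-quoted form would be:

        cacls.exe work /t /p "PIXLA09\Maaartin:f" "NT AUTHORITY\SYSTEM:f"

    (From PowerShell, single quotes are legal, but the trailing :f would still need to move inside them.)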

    Read the article

  • HD read error while booting Linux

    - by sidharth sharma
    I have been dual-booting Windows 7 and Ubuntu on my laptop for the past 3 years, and all was working fine until I started getting logs like:
    ata1.00: status: { DRDY ERR }
    ata1.00: error: { UNC }
    ata1.00: configured for UDMA/133
    sd 0:0:0:0: [sda] Unhandled sense code
    sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
    I figured it was a hardware problem and ignored it as long as I could, until the HD crashed on me. Then I got a brand new HD and installed Windows and Ubuntu afresh on it, but the problem still persists. Any help?
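    Before reinstalling anything else, it may be worth letting the drive testify for itself; a sketch with smartmontools, assuming the disk really is /dev/sda:

        sudo apt-get install smartmontools
        sudo smartctl -H /dev/sda     # one-line overall health verdict
        sudo smartctl -a /dev/sda     # full report; nonzero Reallocated_Sector_Ct or
                                      # Current_Pending_Sector on a new drive is a bad sign

    If the same UNC/Medium Error pattern survives a brand-new disk, suspicion shifts to the parts that didn't change: the SATA cable, the controller, or the drive bay.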

    Read the article

  • Splitting a tetris game apart - where to put time-management?

    - by nightcracker
    I am creating a Tetris game in C++ & SDL, and I'm trying to do it "good" by making it object-oriented and keeping scopes small. So far I have the following structure:
    • A main with some low-level SDL setup, which also handles input
    • A Game class that keeps track of the score and provides the interface for main (move block down, etc.)
    • A Map class that keeps track of the current game field - which blocks are where. Used by the Game class.
    • A Block class representing the currently falling block, used by Game
    • A Renderer class abstracting low-level SDL into a format where you render "tetris blocks". Used by Map and Block.
    Now I'm having a tough time deciding where to put the time management of this game. For example, where should it be decided, once a block bumps the bottom of the screen, how long it takes until the current block locks in place and a new block spawns? I also have another, unrelated question: is there some place where you can find standard data on Tetris, like standard score tables, rulesets, timings, etc.?
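    On the timing question, one arrangement that keeps the scopes small is to let Game own a lock-delay timer while Map only answers "is this block supported?"; a C++/SDL sketch (class and method names invented here, not the poster's API):

        #include <SDL.h>   // for Uint32; pass SDL_GetTicks() in as nowMs each frame

        class Game {
        public:
            void Update(Uint32 nowMs) {                   // Map and Block are the poster's classes
                if (map.IsSupported(block)) {             // resting on the floor or the stack?
                    if (lockStartMs == 0)
                        lockStartMs = nowMs;              // start the lock delay
                    else if (nowMs - lockStartMs >= 500) {// guideline games use roughly 0.5 s
                        map.Lock(block);                  // block becomes part of the field
                        block = SpawnNextBlock();
                        lockStartMs = 0;
                    }
                } else {
                    lockStartMs = 0;                      // falling again: reset the delay
                }
            }
        private:
            Map map;
            Block block;
            Uint32 lockStartMs = 0;
            Block SpawnNextBlock();
        };

    For the second question, searching for the "Tetris Guideline" turns up the standardized scoring tables, gravity curves, lock delay and randomizer rules that the official games follow.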

    Read the article

  • Configuring NVIDIA Quadro with Dell Precision M4600

    - by vsecades
    After a frustrating couple of weeks with my recently bought Dell Precision laptop, I managed to fix an issue where Ubuntu (yes, I was NOT using Windows, get serious) would not recognize the video card and would cause all sorts of problems all over the place. One Saturday morning I ended up nearly throwing this thing away, when I managed to find a post about NVIDIA Optimus technology ( http://www.pcmag.com/article2/0,2817,2358963,00.asp ). Now, I am a huge advocate of disruptive new stuff, as long as we keep the broader audience in mind. Anyhow, disabling this (which, as the BIOS settings state, only works on Windows 7 or later) effectively allows the NVIDIA-based Ubuntu driver to kick in at full force. No need for a trash can anymore, thankfully. Since I saw multiple posts all over the place about this: check your BIOS, disable the setting, and try the video again to see if this corrects your issues. Best of luck!

    Read the article

  • How to convince a client to switch to a framework *now*; also, examples of great, large-scale PHP applications

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has great potential in its basic concept and in the ideas for how it will be implemented (implementation as in how these ideas will be realized, not as in programming). The state of the code right now is unluckily subpar: it's vanilla PHP, no framework, no separation between application and visualization logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong; that was not the case here). The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and only then think about rebuilding. Since just the basic functionality is present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported the code to some framework (we like CodeIgniter) before launching; we could reasonably do that in < 10 days. Porting to CI now would make future improvements incredibly easier, make the current code more secure, and make changing the style - still being discussed with the designers - a breeze (reminder: there are database calls in template files right now). Problem is, they don't think PHP is a valid long-term solution anyway, so they would prefer to just let it be, fix the bugs for now (there are quite a few), and then switch directly to some Ruby/Python-based system. The biggest obstacle is the lack of trust in PHP as a valid, scalable technology. So I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to convince them to port soon. Again, they're great people; it's not like they want Ruby because it's so hot right now. They just don't trust PHP, since us cool programmers like bashing it, I suppose, but I'm sure going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.

    Read the article

  • Use DOS batch to move all files up 1 directory

    - by Harminoff
    I have created a batch file to be executed through the right-click menu in Win7. When I right-click on a folder, I would like the batch file to move all files (excluding folders) up one directory. I have this so far:
    PUSHHD %1
    MOVE "%1\*.*" ..\
    This seems to work as long as the folder I'm moving files from doesn't have any spaces in its name. When the folder does have spaces, I get an error message: "The syntax of the command is incorrect." So my batch works on a folder titled PULLTEST but not on a folder titled PULL TEST. Again, I don't need it to move folders, just files, and I would like it to work in any directory on any drive; there will be no specific directories I'll be working in, it will be random. Below is the registry file I made, if needed for reference:
    Windows Registry Editor Version 5.00
    [HKEY_CLASSES_ROOT\Directory\shell\PullFiles]
    @="PullFilesUP"
    [HKEY_CLASSES_ROOT\Directory\shell\PullFiles\command]
    @="\"C:\\Program Files\\MyBatchs\\PullFiles.bat\" \"%1\""
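    In case it's useful, a sketch of a quote-safe version: the registry entry already wraps %1 in quotes, so "%1\*.*" expands to ""C:\PULL TEST"\*.*" and the parser chokes at the space. %~1 strips the outer quotes so you can re-quote cleanly (note also that the command is pushd, not PUSHHD):

        @echo off
        rem %~1 = first argument with its surrounding quotes removed
        pushd "%~1"
        move "%~1\*.*" ..\
        popd

    The same %~1 trick applies anywhere an argument may arrive pre-quoted, which registry "%1" handlers always do for paths with spaces.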

    Read the article

  • Is it faster to create indexes before or after data loading in MySQL?

    - by Josh Glover
    I have a data replication process that drops and recreates a few tables in a target database, then loads them up with data from a source database (running on another host, but that is immaterial to the question at hand). The target database does need primary keys and a few other indexes on its tables, but not during the data loading. I'm currently loading all of the data, then creating the indexes. However, index creation takes a pretty long time--30 minutes of my data loader's 5 and a half hour running time. My intuition tells me that creating the indexes at the end should be faster than creating them first, since the index would need to be rewritten with each insert. Can anyone tell me for sure which way is faster? FWIW, I'm running MySQL 5.1 with InnoDB tables.
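    For the InnoDB case specifically, there is a wrinkle worth knowing: the primary key is the clustered index, so adding it after the load rebuilds the whole table, while secondary indexes are the ones that pay off when deferred (markedly so on 5.1 with the InnoDB plugin's fast index creation). A sketch with placeholder table and column names:

        -- keep the clustered PK in the CREATE, defer only the secondary indexes
        CREATE TABLE widgets (
            id   INT NOT NULL PRIMARY KEY,
            name VARCHAR(100)
        ) ENGINE=InnoDB;

        LOAD DATA INFILE '/tmp/widgets.csv' INTO TABLE widgets;

        ALTER TABLE widgets ADD INDEX idx_name (name);

    Timing both orders against a restorable copy of one big table will settle the question for your data far more reliably than intuition.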

    Read the article
