Search Results

Search found 13653 results on 547 pages for 'old school'.

Page 329/547 | < Previous Page | 325 326 327 328 329 330 331 332 333 334 335 336  | Next Page >

  • Title of the page in search results and title of Google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws The title of the first result is "Kansas Gun Laws - Gun Laws By State". However, on the page Google has cached, the title is different: <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>. Google is showing the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached it (the latest version is from 15th Sept), yet for some reason it doesn't update the title in the search results. We use a hash-bang URL structure on this website, and it fully meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I described is happening with almost all hash-bang pages that got indexed. Questions: Why does Google keep the old page title in the search results? Could it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, and all of them have hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do anything wrong? Google has cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English). PS. Here's a weird thing I also noticed: the search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page for different queries?
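
    For reference, a quick sketch of the URL mapping in the AJAX crawling scheme the poster is describing (the path here is invented for illustration): a page addressed as

        http://gunlawsbystate.com/#!kansas-gun-laws

    is fetched by the crawler as

        http://gunlawsbystate.com/?_escaped_fragment_=kansas-gun-laws

    and the HTML snapshot served for that second URL is what ends up indexed and cached, titles included.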

    Read the article

  • How to use Pixel Bender (pbj) in ActionScript3 on large Vectors to make fast calculations?

    - by Arthur Wulf White
    Remember my old question: 2d game view camera zoom, rotation & offset using 'Filter' / 'Shader' processing? I figured I could use a Pixel Bender shader to do the computation for any large group of elements in a game to save on processing time. At least it's a theory worth checking. I also read this question: Pass large array to pixel shader, which I'm guessing is about accomplishing the same thing in a different language. I read this tutorial: http://unitzeroone.com/blog/2009/03/18/flash-10-massive-amounts-of-3d-particles-with-alchemy-source-included/ I am attempting to do some tests. Here is some of the code:

        private const SIZE : int = Math.pow(10, 5);
        private var testVectorNum : Vector.<Number>;

        private function testShader():void
        {
            shader.data.ab.value = [1.0, 8.0];
            shader.data.src.input = testVectorNum;
            shader.data.src.width = SIZE/400;
            shader.data.src.height = 100;
            shaderJob = new ShaderJob(shader, testVectorNum, SIZE / 4, 1);
            var time : int = getTimer(), i : int = 0;
            shaderJob.start(true);
            trace("TEST1 : ", getTimer() - time);
        }

    The problem is that I keep getting an error saying:

        [Fault] exception, information=Error: Error #1000: The system is out of memory.

    Update: I managed to partially work around the problem by converting the vector into BitmapData (using this technique I still get a speed boost of 3x with Pixel Bender):

        private function testShader():void
        {
            shader.data.ab.value = [1.0, 8.0];
            var time : int = getTimer(), i : int = 0;
            testBitmapData.setVector(testBitmapData.rect, testVectorInt);
            shader.data.src.input = testBitmapData;
            shaderJob = new ShaderJob(shader, testBitmapData);
            shaderJob.start(true);
            testVectorInt = testBitmapData.getVector(testBitmapData.rect);
            trace("TEST1 : ", getTimer() - time);
        }

    Read the article

  • Profit : August, 2012

    - by user462779
    The August 2012 issue of Profit is now available online. Way back in 2003, I wrote my first feature for Profit. It was titled "Everything You Always Wanted to Know About Application Servers (But Were Afraid To Ask)," and it discussed "cutting-edge" technologies like portals and XML and the brand-new Java Platform, Enterprise Edition (Java EE; we're now on Java EE 7). But despite the dated terms I used in my Profit debut, I noticed something in rereading that old story that has stayed constant: mid-tier technology is where innovative enterprise IT projects happen. It may have been XML in 2003, but it's SOA in 2012. Preparing the August issue of Profit was more than just a stroll down memory lane for me; it provided a nice bit of perspective about what changes and what doesn't in this dynamic IT industry. Technologies continuously evolve—some become standard practice, some are revived or reinvented, and some are left by the wayside. But the drive to innovate and the desire to succeed are business principles that never go out of fashion. Also, be sure to check out the Profit JD Edwards Special Issue 2012 (PDF), featuring partner profiles, customer successes, and Oracle executive interviews. In this issue:
    - The Middleware Advantage: Three ways a flexible, integrated software layer can deliver a competitive edge.
    - Playing to Win: Electronic Arts' superefficient hub processes millions of online gaming transactions every day.
    - Adjustable Loans: With Oracle Exadata, Reliance Commercial Finance keeps pace with India's commercial loan market.
    - Future Proof: To keep pace with mobile, social, and location-based services, smart technologists are using middleware to innovate.
    - Spring Training: Knowledge and communication help Jackson Hewitt's Tim Bechtold get seasonal workers in top shape.
    - Keeping Online Customers Happy: Customers worldwide are comfortable with online service—but are companies meeting customers' needs?

    Read the article

  • Distinguishing the Transport Layer from the Session Layer

    In the OSI communications model, the Session layer manages the creation and removal of the association between two communicating network end points. This can also be called a connection. A connection is maintained while the network end points are communicating with each other in a conversation or session of some unknown length of time. Some connections and sessions only need to last long enough to send a message or a piece of data in one direction. Other sessions may last longer, typically until one or both of the network end points choose to terminate the connection. The Transport layer ensures that messages/data are delivered without errors, in sequence, and with no data content losses or duplications. It relieves the Session layer from any concern with the transfer of data between it and its peers. Examples: the Session layer acts like a manager and asks one of its employees (the Transport layer) to move a piece of data or a message from one network end point to another. The Transport layer takes the request of the Session layer and moves the data as instructed; it also double-checks the data for any missing or corrupted information after it has been moved. If for some reason the new data does not match the old data, the Transport layer will attempt to move the data again until the data at both locations is in sync.
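
    As a rough illustration of this division of labour (a sketch added for clarity, not part of the original write-up): in the snippet below, TCP plays the Transport layer role of ordered, error-checked delivery, while the application code that opens and closes the connection is handling the "session". The host and request are placeholders.

        import socket

        HOST, PORT = "example.com", 80  # placeholder end point

        # "Session": establish the association between the two end points.
        conn = socket.create_connection((HOST, PORT))
        try:
            # "Transport": TCP makes sure these bytes arrive intact and in order,
            # retransmitting behind the scenes if segments are lost or corrupted.
            conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(conn.recv(4096).decode(errors="replace"))
        finally:
            # "Session": tear the association down when the conversation is over.
            conn.close()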

    Read the article

  • Run database checks but omit large tables or filegroups - New option in Ola Hallengren's Scripts

    - by Greg Low
    One of the things I've always wanted in DBCC CHECKDB is the option to omit particular tables from the check. The situation that I often see is that companies with large databases often have only one or two very large tables. They want to run a DBCC CHECKDB on the database to check everything except those couple of tables due to time constraints. I posted a request about this on the Connect site some time ago: https://connect.microsoft.com/SQLServer/feedback/details/611164/dbcc-checkdb-omit-tables-option The workaround from the product team was that you could script out the checks that you did want to carry out, rather than omitting the ones that you didn't. I didn't overly like this as a workaround, as clients often had a very large number of objects that they did want to check and only one or two that they didn't. I've always been impressed with the work that our buddy Ola Hallengren has done on his maintenance scripts. He pinged me recently about my old Connect item and said he was going to implement something similar. The good news is that it's available now. Here are some examples he provided of the newly-supported syntax:

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKDB'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKALLOC,CHECKTABLE,CHECKCATALOG', @Objects = 'ALL_OBJECTS,-AdventureWorks.Person.Address'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'AdventureWorks.PRIMARY'

        EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'AdventureWorks', @CheckCommands = 'CHECKFILEGROUP,CHECKCATALOG', @FileGroups = 'ALL_FILEGROUPS,-AdventureWorks.PRIMARY'

    Note the syntax to omit an object from the list of objects and the option to omit one filegroup. Nice! Thanks Ola! You'll find details here: http://ola.hallengren.com/

    Read the article

  • Data Pump: Consistent Export?

    - by Mike Dietrich
    Ouch ... I have to admit, as I did say in several workshops in the past weeks, that a Data Pump export with expdp is per se consistent. Well ... I thought it was ... but it's not. Thanks to a customer who is doing a large Unicode migration at the moment. We were discussing parameters in the expdp's par file, and I did ask my colleagues after doing some research on MOS. Here are the results of my "research": MOS Note 377218.1 has a nice example showing a Data Pump export of a partitioned table, with DELETEs running on that table, as inconsistent. Background: back in the old 9i days when Data Pump was designed, flashback technology wasn't as popular and well known as today - and UNDO usage was the major concern, as a consistent-by-default export would have relied heavily on UNDO. That's why - similar to good ol' exp - the export won't operate in consistency mode by default. To get a consistent Data Pump export with expdp you'll have to set FLASHBACK_TIME=SYSTIMESTAMP in your parameter file. Then it will be consistent as of the timestamp at which the process was started. You could use FLASHBACK_SCN instead and determine the SCN beforehand if you'd like to be exact. So sorry if I had proclaimed a feature which unfortunately is not there by default - Mike
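
    As a concrete sketch of the fix described above, a minimal parameter file might look like the following - the directory object, dump file, and schema names are invented for illustration; only the last line is the point:

        DIRECTORY=DATA_PUMP_DIR
        DUMPFILE=scott_consistent.dmp
        LOGFILE=scott_consistent.log
        SCHEMAS=SCOTT
        FLASHBACK_TIME=SYSTIMESTAMP

    which you would then run with something like expdp system PARFILE=exp_consistent.par.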

    Read the article

  • Configuring USB modem (Huawei EC156) in Ubuntu 13.10

    - by user205427
    I am facing difficulty installing my USB modem on Ubuntu 13.10. Contrary to what many have suggested, it does not get detected automatically, nor does setting up a new connection help. The USB device is listed in lsusb, but not under Network Manager or Devices; it is detected as a CD-ROM. What I understood from the web was that usb-modeswitch can be used to switch it to a USB modem device. Even the 'Enable Mobile Broadband' option is not shown in Network Manager. What is interesting is that when I start the laptop with Windows 7 and use the USB modem, and after that restart with Ubuntu, both 'Enable Broadband' and the mobile broadband connection can be seen. Sadly, an internet connection still could not be established. I tried using the usb_modeswitch command as suggested somewhere, but it does not seem to work. Following is the message:

        Take all parameters from the command line
        * usb_modeswitch: handle USB devices with multiple modes
        * Version 2.0.1 (C) Josua Dietze 2013
        * Based on libusb1/libusbx
        ! PLEASE REPORT NEW CONFIGURATIONS !
        DefaultVendor=  0x12d1
        DefaultProduct= 0x1505
        HuaweiMode=1
        NeedResponse=0
        InquireDevice enabled (default)
        Look for default devices ...
          found USB ID 8087:0020
          found USB ID 1d6b:0002
          found USB ID 0461:4db6
          found USB ID 12d1:1505
            vendor ID matched
            product ID matched
          found USB ID 138a:0007
          found USB ID 03f0:231d
          found USB ID 8087:0020
          found USB ID 1d6b:0002
        Found devices in default mode (1)
        Access device 005 on bus 001
        Get the current device configuration ...
        OK, got current device configuration (1)
        Use interface number 0
        Use endpoints 0x08 (out) and 0x87 (in)
        Inquire device details; driver will be detached ...
        Looking for active driver ...
        OK, driver detached
        INQUIRY message failed (error -9)
        USB description data (for identification)
        -------------------------
        Manufacturer: HUA?WEI TECHNOLOGIES
        Product: HUAWEI Mobile
        Serial No.: ???????????????????
        -------------------------
        Send old Huawei control message ...
        -> Run lsusb to note any changes. Bye!

    I have been stuck with this problem for 4 days now; any help would be appreciated.

    Read the article

  • Site Web Analytics not updating in SharePoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating data - when you go to your site > Site Settings > Site Web Analytics reports or Site Collection Analytics reports, you get old data and the ribbon displays "Data Last Updated: 12/13/2010 2:00:20 AM" - please ensure that the following things are covered:
    1. The Usage and Health Data Collection service is configured correctly.
    2. The log collection schedule is configured correctly.
    3. The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
    4. One last important timer job is the Web Analytics Trigger Workflows Timer Job: ensure that this timer job is enabled and scheduled to run at regular intervals (for each site that you need analytics for).
    After you have ensured that the web analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS LOGS folder into the WSS_Logging database, and that all the required timer jobs are running as expected… wait for a day for the report to get updated… the report gets updated automatically at 2:00 AM… and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)

    Read the article

  • Problem installing drivers for Brother DCP-1400

    - by ToddB
    I have a problem installing drivers for my DCP-1400 printer. I am using Ubuntu 11.10 and am new to Ubuntu and Linux in general. I downloaded the lpr and cupswrapper files from the Brother Linux site into my Downloads directory. I then run Terminal and switch to the Downloads directory. At the prompt I type in the following:

        sudo dpkg -i --force-all dcp1400lpr-1.1.2-1.386.deb

    It asks for my password, which I type in, and I press Enter. I get the following errors:

        (Reading database ... 155195 files and directories currently installed.)
        Preparing to replace dcp1400lpr 1.1.2-1 (using dcp1400lpr-1.1.2-1.i386.deb) ...
        Unpacking replacement dcp1400lpr ...
        /var/lib/dpkg/info/dcp1400lpr.postrm: 3: /etc/init.d/lpd: not found
        dpkg: warning: subprocess old post-removal script returned error exit status 127
        dpkg - trying script from the new package instead ...
        /var/lib/dpkg/tmp.ci/postrm: 3: /etc/init.d/lpd: not found
        dpkg: error processing dcp1400lpr-1.1.2-1.i386.deb (--install):
        subprocess new post-removal script returned error exit status 127
        /var/lib/dpkg/tmp.ci/postrm: 3: /etc/init.d/lpd: not found
        dpkg: error while cleaning up:
        subprocess new post-removal script returned error exit status 127
        Errors were encountered while processing:
        dcp1400lpr-1.1.2-1.i386.deb

    I now notice that when I open the Synaptic package manager I get the message:

        E: The package dcp1400lpr needs to be reinstalled, but I can't find an archive for it.
        E: Internal error opening cache (1). Please report.

    Any help provided would be appreciated. Thank you. TB

    Read the article

  • Ubuntu Lenovo OCZ Agility3 - No GRUB after install

    - by Michael
    I've tried a dual-boot (Win7 + Ubuntu) installation on a Lenovo E330 with an Agility3 240 GB... Conclusions:
    - Ubuntu: Ubuntu 12.04 x86_64 (21.06.2012) is not able to install GRUB in a bootable way. GRUB gets installed and update-grub runs during installation and also recognizes the Windows OS, but after a restart the machine boots directly to Windows. This is directly connected to the OCZ Agility3 - on a good old-fashioned hard disk (the kind with moving parts) Ubuntu is able to install GRUB in a bootable manner with no problem.
    - PinguyOS: PinguyOS 12.04 LTS x86_64 (which is an Ubuntu-based distro) is able to handle the GRUB installation on the OCZ Agility3. However, they both use GRUB 1.99...
    What I did: installed Windows, installed Ubuntu, installed PinguyOS.
    GRUB updates: GRUB updates are only possible through PinguyOS, which means you have to edit the Ubuntu GRUB entries manually in the PinguyOS system after kernel updates on Ubuntu.
    What I've already tried:
    - firmware upgrade of the OCZ (live stick, successful)
    - installing the Ubuntu GRUB to sda
    - installing the Ubuntu GRUB to sdc (the Ubuntu partition)
    - installing the Ubuntu GRUB to /boot
    - running update-grub manually after install
    - restoring GRUB
    Any ideas appreciated..

    Read the article

  • How do I find a programming internship / practice?

    - by user828584
    I'm taking the SAT soon, and quickly heading toward the chaos of figuring out which colleges I will be able to attend, and how on Earth I'll be able to afford it. I would like to gain some experience in programming or web development, but I don't know where to look. I've been trying my best to learn over the past year, and have been doing alright in C# and the web languages (HTML, PHP, CSS, JavaScript). I have no idea where to look, though. I've asked similar questions and rummaged through old questions on here, and they all say nothing specific. The main two points are always "Contribute to open source projects" and "Find a company and ask to be a part of it." I don't know how to find either of the two. I've looked online at GitHub, SourceForge, and the like, but all the projects are already so far along that I just don't have the experience needed to bring myself up to speed with their code. I don't have much experience in code management, and I don't know how to get it. I would be ecstatic to start a project with a group of more experienced members, but, like I said, I have no clue how to find these people.

    Read the article

  • Is djvubundle available in Ubuntu?

    - by Tim
    The official webpage says:

        Assembling DjVu Images into Multipage Documents
        The batch compressors distributed as part of the DjVuText and DjVuLayered packages can directly produce a multipage DjVu file when fed with multiple input files. The files produced are smaller than if the pages are compressed separately, because the compressor can extract and share redundant information across multiple pages. Individually compressed DjVu pages can be assembled into multipage documents using the free package DjVuMulti. To assemble a bunch of DjVu images into a single BUNDLED document, simply type:
            djvubundle page1.djvu page2.djvu ... pageN.djvu document.djvu
        To assemble a bunch of DjVu images into an INDIRECT document, type:
            djvujoin page1.djvu page2.djvu ... pageN.djvu documentdir/index.djvu
        where documentdir must be an existing directory where all the individual page files will be copied. To disassemble a BUNDLED document into an INDIRECT one, simply say:
            djvujoin document.djvu documentdir/indexfile.djvu
        To convert a multipage document from one of the old 2.0 multipage formats, do:
            djvureindex olddocument newdocument
        The programs djvujoin and djvubundle supersede the 2.0 programs djvuindex and djvumerge.

    I couldn't find djvujoin or djvubundle for Ubuntu, and djvulibre doesn't have them either. Am I missing something? Thanks.

    Read the article

  • Differences when Running with OutputCache managed module under ASP.NET IIS7.x with Cache-control header

    - by Shawn Cicoria
    This post is to report some differences when using MVC or IHttpHandlers if you're attempting to set the Cache-control: max-age or s-maxage value under IIS7.x using the HttpResponse.Cache methods.

    [UPDATE]: 2011-3-14 - The missing piece was calling Response.Cache.SetSlidingExpiration(true) as follows:

        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
        context.Response.ContentType = "image/jpeg";
        context.Response.Cache.SetSlidingExpiration(true);

    Under IIS7.x, if you use one of the following two methods, you will only get a cacheability of "public".

        public ActionResult Image2()
        {
            MemoryStream oStream = new MemoryStream();
            using (Bitmap obmp = ImageUtil.RenderImage("Respone.Cache.Setxx calls", 5, DateTime.Now))
            {
                obmp.Save(oStream, ImageFormat.Jpeg);
                oStream.Position = 0;
                Response.Cache.SetCacheability(HttpCacheability.Public);
                Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
                return new FileStreamResult(oStream, "image/jpeg");
            }
        }

    Method 2 - which is just a plain old HttpHandler and really isn't MVC3, but under the same MVC ASP.NET application - gives the same result:

        public class image : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                using (var image = ImageUtil.RenderImage("called from IHttpHandler direct", 5, DateTime.Now))
                {
                    context.Response.Cache.SetCacheability(HttpCacheability.Public);
                    context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
                    context.Response.ContentType = "image/jpeg";
                    image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
                }
            }
        }

    Using the following under MVC3 (I haven't tried under earlier versions) will work, by applying the OutputCacheAttribute to your Action:

        [OutputCache(Location = OutputCacheLocation.Any, Duration = 300)]
        public ActionResult Image1()
        {
            MemoryStream oStream = new MemoryStream();
            using (Bitmap obmp = ImageUtil.RenderImage("called with OutputCacheAttribute", 5, DateTime.Now))
            {
                obmp.Save(oStream, ImageFormat.Jpeg);
                oStream.Position = 0;
                return new FileStreamResult(oStream, "image/jpeg");
            }
        }

    To remove the "OutputCache" module, you use the following in your web.config:

        <system.webServer>
            <validation validateIntegratedModeConfiguration="false"/>
            <modules runAllManagedModulesForAllRequests="true">
                <!--<remove name="OutputCache"/>-->
            </modules>
        </system.webServer>

    Read the article

  • Strategy for backwards compatibility of persistent storage

    - by Baqueta
    In my experience, trying to ensure that new versions of an application retain compatibility with data storage from previous versions can often be a painful process. What I currently do is to save a version number for each 'unit' of data (be it a file, database row/table, or whatever) and ensure that the version number gets updated each time the data changes in some way. I also create methods to convert from v1 to v2, v2 to v3, and so on. That way, if I'm at v7 and I encounter a v3 file, I can do v3-v4-v5-v6-v7. So far this approach seems to be working out well, but I haven't had to make use of it extensively yet, so there may be unforeseen problems. I'm also concerned that if the objects I'm loading change significantly, I'll either have to keep around old versions of the classes or face updating all my conversion methods to handle the new class definition. Is my approach sound? Are there other/better approaches I could be using? Are there any design patterns applicable to this problem?
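
    A minimal sketch of the chained-converter idea described above (the field names and data layout are invented for illustration, not taken from the post):

        # Each step upgrades a record from one schema version to the next.
        def v1_to_v2(data):
            data["full_name"] = data.pop("name")   # field renamed in v2
            return data

        def v2_to_v3(data):
            data.setdefault("tags", [])            # field added in v3
            return data

        CONVERTERS = {1: v1_to_v2, 2: v2_to_v3}    # version -> upgrade step
        CURRENT_VERSION = 3

        def upgrade(record):
            """Walk the chain until the record reaches the current version."""
            while record["version"] < CURRENT_VERSION:
                record = CONVERTERS[record["version"]](record)
                record["version"] += 1
            return record

        old = {"version": 1, "name": "Ada"}
        print(upgrade(old))  # {'version': 3, 'full_name': 'Ada', 'tags': []}

    Keeping each step tiny like this is what makes the v3-v4-v5-v6-v7 walk cheap: only the newest converter ever needs to know about the newest class definition.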

    Read the article

  • JDK bug migration: bugs.sun.com now backed by JIRA

    - by darcy
    The JDK bug migration from a Sun legacy system to JIRA has reached another planned milestone: the data displayed on bugs.sun.com is now backed by JIRA rather than by the legacy system. Besides maintaining the URLs to old bugs, bugs filed since the migration to JIRA are now visible too. The basic information presented about a bug is the same as before, but reformatted and using JIRA terminology:
    - Instead of a "category", a bug now has a "component / subcomponent" classification. As outlined previously, part of the migration effort was reclassifying bugs according to a new classification scheme; I'll write more about the new scheme in a subsequent blog post.
    - Instead of a list of JDK versions a bug is "reported against," there is a list of "affected versions." The names of the JDK versions have largely been regularized; code names like "tiger" and "mantis" have been replaced by the release numbers like "5.0" and "1.4.2".
    - Instead of "release fixed," there are now "Fixed Versions."
    - The legacy system had many fields that could hold a sequence of text entries, including "Description," "Workaround", and "Evaluation." JIRA instead only has two analogous fields, labeled as "Description", and a unified stream of "Comments."
    Nearly coincident with switching to JIRA, we also enabled an agent which automatically updates a JIRA issue in response to pushes into JDK-related Hg repositories. These comments include the changeset URL, the user making the push, and a time stamp. These comments are first added when a fix is pushed to a team integration repository and then added again when the fix is pushed into the master repository for a release. We're still in early days of production usage of JIRA for JDK bug tracking, but the transition to production went smoothly and over 1,000 new issues have already been filed. Many other facets of the migration are still in the works, including hosting new incidents filed at bugs.sun.com in a tailored incidents project in JIRA.

    Read the article

  • Can anyone recommend an AI sandbox?

    - by user19433
    I'm a passionate person who has been around AI for a long time [1] but never gone deep enough into it. Now it's time! I've been really looking for some way to concentrate on AI coding, but couldn't manage to find an AI environment I can focus on. I just want to use an AI sandbox environment which would give me tools like:
    - visibility information
    - a character controller
    - the ability to easily define a level, with obstacles of course
    - physics
    - collider management
    - trigger management
    - it doesn't need to be a shiny, eye-candy graphical renderer: this is about pathfinding, tactical reasoning, etc.
    I have tried:
    - Unreal Dev Kit: while the new release announcement is about C++ coding, this is about external tools and will be released in 2013.
    - CryEngine: really interesting, as AI is present here, but coding with it appears to be hell: did I get it wrong?
    - Half-Life source, C4, Torque, DX Studio: either quite old, not very useful, or costly; these imply digging into the documentation (when provided) to code everything, graphics included.
    - Unity 3D: the most promising platform. While you also need to create your own environment, there are lots of examples. The disadvantage, in addition to having to spend time getting this environment working, is the language choice: C#, JavaScript or Boo. C# is not that hard, but it implies you'll always have to convert papers (I love those from Lars Linden), book code, or anything you can find on AiGameDev, which are most often in C++. This is extra work. I've looked at "Simple Path", the very good Aron Granberg work (but no source provided), and AngryAnt's work.
    - AI Sandbox: this seems to be exactly what I, as an AI coder, want to use. I saw some previews, but from 2009; we still don't know what it will be about precisely, whether it will be open source or free (I strongly doubt it), or whether I will be able to buy it. Will it really provide the tools I need to focus on AI?
    That being said, what is the best environment for focusing on AI coding only? Is it even possible?

    Read the article

  • What is the best way to track / record the current programming project you work on? [duplicate]

    - by user2424160
    This question already has answers here:
    - Methodology for Documenting Existing Code Base (6 answers)
    - When do you start documenting the code? (13 answers)
    - Where should a programmer explain the extended logic behind the code? (5 answers)
    I have had this problem for a long time and I want to know how it's done in real / big company projects. Suppose I have a project to build a website. I divide the project into sub-tasks and do them. But suppose I have task1 in hand, like exporting the page to PDF. I spend 3 days doing that, come across various problems and many Stack Overflow questions, and in the end I solve it. Now, 4 months later, someone tells me that there is some error in the code. By then I have completely forgotten about 60% of how I did it and why I did it that way. I document the code, but I can't write the whole story in the code. Then I have to spend a lot of time on the code to find what the problem was and why I added this line, etc. I want to know whether there is any way that I can log the steps taken in completing the project, so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, and so on. How do people do it in real projects? Which software do they use? I know that in our project management software, JIRA, we have tasks, but that does not cover what steps I took to solve those tasks. What is the best way, so that when I look back at my 2-year-old project, I know how I solved a particular task?

    Read the article

  • How to restore Windows 7 after restoring the Ubuntu bootloader?

    - by Mateusz Rogulski
    At first I will describe my situation in a few points: I have installed Windows 7, and then Ubuntu 11.04, on my machine. Everything works fine, and at system start I get the screen from Linux where I can choose the system. Then I reinstall Windows 7 and install Windows 8 on another partition. After that I can choose between Win7 and Win8 when I start the system. Then I need my Ubuntu back, so I want to restore my bootloader from Ubuntu. I boot Ubuntu from USB and in a terminal write these commands:

        sudo fdisk -l

    Then I get:

        /dev/sda1            1        13      104391   de  Dell Utility
        /dev/sda2           14      2805    22425601    5  Rozszerzona
        /dev/sda3   *     2805     41968   314572800    7  HPFS/NTFS
        /dev/sda4        41968     60802   151282688    7  HPFS/NTFS
        /dev/sda5           14      2445    19530752   83  Linux
        /dev/sda6         2445      2805     2893824   82  Linux swap / Solaris

    Next commands:

        sudo mount /dev/sda5 /mnt
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo chroot /mnt
        grub-install /dev/sda

    I get "Installation finished. No error reported." And when I start my machine I have the old Ubuntu start screen to choose the system. Ubuntu works well, but there is no Windows 8 option. My primary problem, though, is that when I choose Windows 7 I get:

        error: no such device ...
        error: no such disk

    so I have no idea what I can do. I really need both systems to work. Any help would be appreciated.

    Read the article

  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major / mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet, anonymous functions are a very old and very well-known concept in Mathematics and Computer Science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, and why did they only introduce them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.
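
    For readers who haven't met the construct, a one-file illustration of the difference being discussed (Python is used here purely as an example of a mainstream language; the snippet is not from the original question):

        words = ["lambda", "anonymous", "fn"]

        # Named helper function - the style available in C or Pascal for decades.
        def by_length(w):
            return len(w)

        print(sorted(words, key=by_length))

        # Anonymous (lambda) function - the construct the question asks about.
        print(sorted(words, key=lambda w: len(w)))  # same result, no named helper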

    Read the article

  • Ubuntu 10.04 LTS Server - fresh install - failed apt-get update

    - by user87227
    Good day and greetings to all, I just did a fresh installation of Ubuntu 10.04 LTS Server without any issues. However, apt-get update (or aptitude update) is giving "bzip2: (stdin) is not a bzip2 file" and "Ign" for all lines, plus the following errors:

        Fetched 3,582B in 0s (74.1kB/s)
        Reading package lists...
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //security.ubuntu.com lucid-security Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: //in.archive.ubuntu.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: in.archive.ubuntu.com lucid-updates Release: The following signatures were invalid: NODATA 1 NODATA 2
        W: Failed to fetch security.ubuntu.com/ubuntu/dists/lucid-security/Release
        W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid/Release
        W: Failed to fetch in.archive.ubuntu.com/ubuntu/dists/lucid-updates/Release
        W: Some index files failed to download, they have been ignored, or old ones used instead.

    Please guide me in resolving this error. TIA. Regards, Venu

    Read the article

  • What is the advantage to using a factor of 1024 instead of 1000 for disk size units?

    - by Joe Z.
    When considering the disk space of a storage medium, normally the computer or operating system will represent it in terms of powers of 1024 - a kilobyte is 1,024 bytes, a megabyte is 1,048,576 bytes, a gigabyte is 1,073,741,824 bytes, and so on. But I don't see any practical reason why this convention was adopted. Usually when disk size is represented in kilo-, mega-, or giga-bytes, it has to be converted into decimal first. In places where a power-of-two byte count actually matters (like the block size on a file system), the size is given in bytes anyway (e.g. 4096 bytes). Was it just a little aesthetic novelty that computer makers decided to adopt, but storage medium vendors decided to disregard? Whenever you buy a hard drive, there's always a disclaimer nowadays that says "One gigabyte means one billion bytes". It would feel like using the binary definition of "gigabyte" would artificially inflate the byte count of a device, making drive-makers have to pack 1.1 terabytes into a drive in order to have it show up as "1 TB", or to simply pack 1 terabyte in and have it show up as "931 GB" (and most of them do the latter). Some people have decided to use units like "KiB" or "MiB" in favour of "KB" and "MB" in order to distinguish the two. But is there any merit to the binary prefixes in the first place? There's probably a bit of old history I'm not aware of on this topic, and if there is, I'm looking for somebody to explain it. (Apologies if this is in the wrong place. I felt that a question on best practice might belong here, but I have faith that it will be migrated to the right place if it's incorrect.)
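
    For the "931 GB" figure mentioned above, the arithmetic is just the same byte count read in the two conventions:

        1 TB as sold    = 10^12 bytes = 1,000,000,000,000 bytes
        10^12 / 1024^3  ≈ 931.32

    so a drive containing one decimal terabyte shows up as roughly 931 binary gigabytes (GiB).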

    Read the article

  • What are the most commonly used and basic Apache htaccess redirects?

    - by bybe
    This question is here so we can offer users who are looking for information on how to make one or more common or basic redirects in Apache using the htaccess file a single reference point. All future questions pertaining to information that is covered by this question should be closed as duplicates of it, as per this Meta question. What's the point of this question? The idea, while not perfect, is to catch the most commonly asked questions regarding redirects using the htaccess file on the Apache platform, either on some type of LAMP setup or a live server. The kind of answers we are after are those you could imagine being used by hundreds of thousands of sites world-wide, and that are asked here at Pro Webmasters over and over in various forms.
    A few examples of the type of answers we are looking for:
    - How can I redirect non-www to www?
    - How can I redirect a subdomain to the main domain?
    - How can I redirect a subfolder of a domain to the root or to a subdomain?
    - How can I redirect an old URL to a new URL?
    A few examples of the types of answers that we are not looking for:
    - Answers that do not involve a redirect.
    - Any answers relating to Nginx, IIS or any other non-Apache platform.
    - Answers that involve custom and complex string or query removals.
    Resources for advanced to complex mod_rewrite rules: Everything you ever wanted to know about mod rewrite rules but were afraid to ask. Please note that this question is still under construction and may need some refining, either by myself or a real moderator of Pro Webmasters; if you have any concerns or questions please use the meta page I made a few days back here.
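
    As a sketch of the kind of rules this question is meant to collect (example.com and the paths are placeholders; real sites should adapt the host conditions to their own domain):

        # Redirect non-www to www (301)
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

        # Redirect an old URL to a new URL (301)
        Redirect 301 /old-page.html http://www.example.com/new-page/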

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friend's website (a fairly old PHP-based one) which they've been advised needs restructuring. The key points being:
    - URLs should be lower case and more "friendly".
    - The root of the domain should not be redirected.
    The first point I'm happy with (the URLs needed tidying up anyway) and I have a draft plan of action; however, the second is baffling me, not only as to the best way to do it, but also as to whether it should be done at all. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the following index.php in the root of the Apache2 instance:

        <?php
        $nextpage = "some-link-with-keywords/";
        header( "HTTP/1.1 301 Moved Permanently" );
        header( "Status: 301 Moved Permanently" );
        header("Location: $nextpage");
        exit(0); // This is Optional but suggested, to avoid any accidental output
        ?>

    As far as I'm aware, this has been the case for around three years -- and I'm sorely tempted to advise not to worry about it. It would appear that taking off the 301 could:
    - potentially affect page ranking (as the 'homepage' would disappear - although it couldn't disappear because of the next point...)
    - introduce maintenance issues, as existing users would still have the redirected page in their cache
    - following the above, introduce duplicate content
    - confuse Google and other SEs as to what the homepage actually is now
    I may be over-analysing this, but I have a feeling it's not as simple as removing the 301 from the root and 301'ing the previous target to the root... Any suggestions (including "it's not worth it") are sincerely appreciated.

    Read the article

  • Failed to download repository information (Maverick)

    - by Rhiannon
    I have been through most of the duplicates for this question, and still can't find an answer. I may have missed one but hopefully this isn't a duplicate! Having a problem with updates. I get the "failed to download..."message followed by "Check your internet connection", which is clearing working fine as I am on it now. I click details and get the following **W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http://archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-updates/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-security/universe/source/Sources 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-security/multiverse/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , W:Failed to fetch http:// archive.ubuntu.com/ubuntu/dists/maverick-security/universe/binary-i386/Packages 404 Not Found [IP: 91.189.92.202 80] , E:Some index files failed to download. They have been ignored, or old ones used instead.** All the faults have "maveric" somewhere in them, so I have gone to settings and unticked all the Mavarics I can find, but this problem is still happening. Any ideas? Many thanks

    Read the article

  • What did you learn today?

    - by Rajesh Pillai
    What did you learn today? Every day teaches you something, some lesson or the other. One day you learn a new language, a new skill or a new hobby, or you visit some new place, learn music, have a different dining experience, learn swimming, make some good friends, get in touch with an old friend, etc. etc…. Each of these things teaches you something… So, what did you learn today? Some of my learnings from the past weeks are outlined below…
    - Respect others. Don't underestimate them. (Though I never consciously do so.)
    - Be careful with your words, because words have different meanings if the context is not clear.
    - Spend some time on your personal stuff and allow others to do so.
    - Every individual is different; their skills are different, their thoughts are different, their perceptions are different. So, be polite.
    - Time management. (This is a tough skill to master.) At the end of the day I keep looking for more time, and so, maybe, do you.
    So, again: What did you learn today? This reflection is important, because if you don't know what you are learning at every stage in your life, then you are not learning and not growing. In short, you are not living. Learning is not memorization; it is self-realization….. Happy learning!!!

    Read the article
