Search Results

Search found 27083 results on 1084 pages for 'having'.


  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought it, it came with Windows 7 pre-installed, but I upgraded it to 8 and recently to 8.1. Somehow, I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1. The system recovery drive holds the Windows 7 recovery files. The SSD has 4GB allocated to Windows 8 (I think for hibernation/Rapid Start). 25GB of the SSD is free, and I want to install Ubuntu on this SSD space, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from the recovery partition & the SSD) to free up 100GB, and allocate this space to "/home" during the Ubuntu installation. I tried the above steps while on Windows 8, but was not successful: the Ubuntu installation went fine, but GRUB was not loaded. I tried to deploy Linux via EasyBCD, but after that too, selecting Linux at boot would load GRUB to a command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows installation. I think I am missing some steps, so this time I want to do it right, with proper information before starting the process. My requirements:
    - dual boot Ubuntu with Windows 8.1
    - C:\ shrunk, with Windows on 300GB of sda1, 100GB of sda1 for /home, and Ubuntu installed on the 25GB SSD volume sda2 (this is mSATA, I think)
    - GRUB or EFI set up so that I can load both OSes properly without breaking anything
    - a swap partition (4GB?) added on sda1 if needed
    I have backed up my drive and have a 16GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on what to do right so that this installation goes as planned :)

    Read the article

  • ERP/CRM Systems: Desktop Based? Web Based?

    - by Parhs
    Hello guys. I have seen 2-3 ERPs in action, and I am wondering which is better: a desktop-based application, or a web-based one displayed in a browser. My first experience was with a web-based ERP, when I was 14 years old. It was terribly slow: for the most simple tasks you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine it ran on, which worked until today and still has no problems, was from 1993. The user interface, of course, was text-based, and the speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keystrokes to add a product to the order. Compared to this, the ERP's page for placing orders (Link; click "sales orders") seems terribly slow for adding a product. No keyboard shortcut works to save what you have added, and generally I believe you need four times more time to place an order compared to the text interface. Having to use both mouse and keyboard for this task is bad and sadistic, so how the heck can these people ever use a system like that? So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults a browser uses isn't cross-compatible, and that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters from desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • How ReSharper saved the day

    - by Randy Walker
    The Back Story: As a Microsoft MVP awardee, I receive many benefits: free software, books, and various products.  Some of the producers/manufacturers ask for reviews in exchange, others just ask for a brief mention (nothing is ever really free).  But considering that some of the products are essential to my everyday computing, I never mind mentioning their names and evangelizing their products. One of these tools just happened to save me a countless number of hours.  With the release of Microsoft's Visual Studio 2010, JetBrains released their new 5.0 version of ReSharper. The Story: My specialty is Visual Basic development.  I am not, and probably never will be, a C# developer.  As such, trying to figure out how to debug a C# project that was written two years ago by a contract developer is, let's just say, a painful process. I have a special class for config file reading and writing, written in C#.  I kept getting exceptions when the reader would reach a line that had an XML comment in it.  It took me a couple of hours to narrow down where it was happening and why, but I couldn't figure out the best way to fix it.  It was a for loop that was implicitly casting the type of the variable.  I knew I needed to explicitly cast the variable type, but only after the type was verified.  So after I finally got some of the code written, ReSharper gave me some suggestions on how to write the code better. One of the ways was to safely cast the variable into the type I wanted.  Blammo: no more exceptions, and in a way I hadn't anticipated, without having to check the type before casting it.  Beautiful, simple, and it taught me a better way to code C#. Kudos, JetBrains... now if only it worked better with VB (then it could be called ReBasic, ReVB, RE???)

    Read the article

  • A Generic RIDC Test Program

    - by Kevin Smith
    Many times I have found it useful to use a Java program that communicates with WebCenter Content (WCC) using RIDC for testing. I might not have access to the web GUI, or I may need to test a service running as a specific user. In the past I had created a number of "one-off" programs that submitted specific services, e.g. GET_SEARCH_RESULTS, DOCINFO, etc. Recently I decided to create a generic RIDC test program that could submit any service with the desired parameters, based on a configuration file. The program gets the following information from the configuration file:
    - WCC connection information (host, port)
    - the user to run the service as
    - the service to run
    - any parameters for the service
    The program will make a connection to the WCC server, send the service request, and print the results of the service call using the getResponseAsString() method. Here is a sample configuration file:

        ridc.host=localhost
        ridc.port=4444
        ridc.user=sysadmin
        ridc.idcservice=GET_SEARCH_RESULTS
        idcservice.QueryText=dDocType <matches> `Document`
        idcservice.SortField=dDocName
        idcservice.SortDesc=ASC

    There is a readme file included in the zip with instructions for how to configure and run the program. The program takes one command-line argument, the configuration file name. The configuration file name is optional and defaults to config.properties. If you have any suggestions for improvements, let me know. Right now it only submits a single service call each time you run it. One enhancement I have already thought about would be to allow you to specify multiple services to run in the configuration file. You can do that with the current program by having multiple configuration files and running the program multiple times, each with a different configuration file. You can download the program here.
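
    For context, a minimal sketch of the kind of RIDC call such a program boils down to, using the oracle.stellent.ridc client API; the connection details and parameters mirror the sample configuration above:

        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;
        import oracle.stellent.ridc.protocol.ServiceResponse;

        public class RidcSketch {
            public static void main(String[] args) throws Exception {
                IdcClientManager manager = new IdcClientManager();
                // connection string built from ridc.host / ridc.port
                IdcClient client = manager.createClient("idc://localhost:4444");
                IdcContext user = new IdcContext("sysadmin");   // ridc.user
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
                // every idcservice.* property becomes a binder parameter
                binder.putLocal("QueryText", "dDocType <matches> `Document`");
                binder.putLocal("SortField", "dDocName");
                binder.putLocal("SortDesc", "ASC");
                ServiceResponse response = client.sendRequest(user, binder);
                System.out.println(response.getResponseAsString());
            }
        }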

    Read the article

  • Podcast Show Notes: The Fusion Middleware A-Team and the Chronicles of Architecture

    - by Bob Rhubart
    If you pay any attention at all to the Oracle blogosphere, you've probably seen one of several blogs published by members of a group known as the Oracle Fusion Middleware A-Team. A new blog, The Oracle A-Team Chronicles, was recently launched that combines all of those separate A-Team blogs in one. In this program you'll meet some of the people behind the A-Team and the creation of that new blog.
    The Conversation
    - Listen to Part 1: Background on the A-Team. When was it formed? What is its mission? What are some of the most common challenges A-Team architects encounter in the field?
    - Listen to Part 2 (July 3): The panel discusses the trends (big data, mobile, social, etc.) that are having the biggest impact in the field.
    - Listen to Part 3 (July 10): The panelists discuss the analysts, journalists, and other resources they rely on to stay ahead of the curve as the technology evolves, and reveal the last article or blog post they shared with other A-Team members.
    The Panelists
    - Jennifer Briscoe: Senior Director, Oracle Fusion Middleware A-Team
    - Clifford Musante: Lead Architect, Application Integration Architecture A-Team, webmaster of the A-Team Chronicles
    - Mikael Ottosson: Vice President, Oracle Fusion Apps and Fusion Middleware A-Team and Cloud Applications Group
    - Pardha Reddy: Senior Director of Oracle Identity Management and a member of the Oracle Fusion Middleware A-Team
    Coming Soon
    - Data Warehousing and Oracle Data Integrator: Guest producer and Oracle ACE Director Gurcan Orhan selected the topic and panelists for this program, which also features Uli Bethke, Michael Rainey, and Oracle ACE Cameron Lackpour.
    - Java and Oracle ADF Mobile: An impromptu roundtable discussion featuring QCon New York 2013 session speakers Doug Clarke, Frederic Desbiens, Stephen Chin, and Reza Rahman.
    Stay tuned.

    Read the article

  • How to manage long running background threads and report progress with DDD

    - by Mr Happy
    The title says most of it. I have found surprisingly little information about this. I have a long-running operation of which the user wants to see the progress (as in, item x of y processed). I also need to be able to pause and stop the operation (stopping doesn't roll back the items already processed). The thing is, it's not that each item takes a long time to process; it's that there are usually a lot of items. And what I've read so far is that it's somewhat of an anti-pattern to put something like a queue in the DB. I currently don't have any messaging system in place, and I've never worked with one either. Another thing I read somewhere is that progress reporting is something that belongs in the application layer, but it didn't go into the details. So, having said all this, what I have in mind is the following:
    - A user request with a list of items enters the application layer.
    - The application layer gets some information from the domain needed to process the items.
    - The application layer passes the items and the information off to some domain service (should the implementation of this service belong in the infrastructure layer?).
    - This service spins up a worker thread with callbacks for both progress reporting and pausing/stopping it.
    - This worker thread processes each item in its own UoW, which means the domain information from earlier needs to be stored in some DTO.
    - Since nothing is really persisted, the service should be a singleton and thread-safe.
    - Whenever a user requests a progress report or wants to pause/stop the operation, the application layer asks the service.
    Would this be a correct solution? Or am I at least on the right track? Especially the singleton and thread-safe part makes the whole thing feel icky.
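
    A minimal sketch of the worker described above (Java, hypothetical names), with progress, pause, and stop exposed through shared atomic flags, and each item handled in its own unit of work:

        import java.util.List;
        import java.util.concurrent.atomic.AtomicBoolean;
        import java.util.concurrent.atomic.AtomicInteger;

        // Hypothetical sketch of the worker thread described above.
        public class BatchProcessor implements Runnable {
            private final List<String> items;                 // the items to process
            private final AtomicInteger processed = new AtomicInteger();
            private final AtomicBoolean paused  = new AtomicBoolean(false);
            private final AtomicBoolean stopped = new AtomicBoolean(false);

            public BatchProcessor(List<String> items) { this.items = items; }

            @Override public void run() {
                for (String item : items) {
                    if (stopped.get()) break;                 // stop: no rollback of done items
                    while (paused.get() && !stopped.get()) {  // pause: wait politely
                        try { Thread.sleep(100); } catch (InterruptedException e) { return; }
                    }
                    processInOwnUnitOfWork(item);
                    processed.incrementAndGet();
                }
            }

            // progress report: "item x of y"
            public int processedCount() { return processed.get(); }
            public int total()          { return items.size(); }

            public void pause()  { paused.set(true); }
            public void resume() { paused.set(false); }
            public void stop()   { stopped.set(true); }

            private void processInOwnUnitOfWork(String item) {
                // hypothetical: open a UoW, process the item (using the DTO
                // carrying the domain information), commit, dispose.
            }
        }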

    Read the article

  • Pattern for a class that does only one thing

    - by Heinzi
    Let's say I have a procedure that does stuff:

        void doStuff(initalParams) { ... }

    Now I discover that "doing stuff" is quite a complex operation. The procedure becomes large, I split it up into multiple smaller procedures, and soon I realize that having some kind of state would be useful while doing stuff, so that I need to pass fewer parameters between the small procedures. So, I factor it out into its own class:

        class StuffDoer {
            private someInternalState;

            public Start(initalParams) { ... }

            // some private helper procedures here ...
        }

    And then I call it like this:

        new StuffDoer().Start(initialParams);

    or like this:

        new StuffDoer(initialParams).Start();

    And this is what feels wrong. When using the .NET or Java API, I never call new SomeApiClass().Start(...);, which makes me suspect that I'm doing it wrong. Sure, I could make StuffDoer's constructor private and add a static helper method:

        public static DoStuff(initalParams) {
            new StuffDoer().Start(initialParams);
        }

    But then I'd have a class whose external interface consists of only one static method, which also feels weird. Hence my question: is there a well-established pattern for this type of class, which has only one entry point and no "externally recognizable" state, i.e., where instance state is only required during the execution of that one entry point?
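
    A sketch of the static-helper variant in Java (names are illustrative): the constructor stays private, the instance never escapes, and callers see a single static entry point with no externally visible state.

        // Hypothetical Java sketch of the shape asked about above.
        public final class StuffDoer {
            private int someInternalState;              // shared by the helper steps

            private StuffDoer(String initialParams) {   // not constructible from outside
                this.someInternalState = initialParams.length(); // illustrative only
            }

            public static void doStuff(String initialParams) {
                new StuffDoer(initialParams).start();   // instance lives only here
            }

            private void start() {
                stepOne();
                stepTwo();
            }

            private void stepOne() { /* helper using someInternalState */ }
            private void stepTwo() { /* helper using someInternalState */ }
        }

    Called as StuffDoer.doStuff("..."), so no instance, and no state, is ever visible to the caller.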

    Read the article

  • Making files generally available on Linux system (when security is relatively unimportant)?

    - by Ole Thomsen Buus
    Hi, I am using Ubuntu 9.10 on a stationary PC. I have a secondary 1 TB hard drive with a single big logical partition (currently formatted as ext4). It is mounted as /usr3 with options user, exec in /etc/fstab. I am doing high-speed imaging experiments. Well, only 260 fps, but that still creates many individual files, since each frame is saved as its own PNG file. The machine is not used by anyone other than me, which is why the default security model posed by Ubuntu is not necessary. What is the best way to make the entire contents of /usr3 generally available on all systems, in case I need to move the hard drive to another Ubuntu 9.x or 10.x machine? When grabbing images with the FireWire camera I use a self-made (console-based) grabbing utility in sudo mode. This creates all files with root as owner and group. I am logged in as user otb, and usually I do the following when having to make files generally available to otb:

        sudo chown otb -R *
        sudo chgrp otb -R *
        sudo chmod a=rwx -R *

    This takes some time, since the disk now contains ~200,000 individual files. After this, how would Linux behave if I moved the hard drive to another system where the user otb is also available? Would the files still be accessible without using sudo?

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server, and I want to use services such as Google and Facebook to provide basic authentication for my application. My current idea of a typical flow is this:
    - The user taps a Facebook/Google button, which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that the next time the user uses the app it can be retrieved.
    - The access token is transmitted to my Node.js server, which stores it and tags it as unverified.
    - The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token.
    - The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call.
    Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verifying users on iOS clients without having to roll my own security. Any opinions and advice welcome.
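
    For illustration only, the verification step might look something like this; it is sketched in Java even though the server above is Node.js, and the tokeninfo URL, query format, and response handling are assumptions that vary by provider and API version:

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Hedged sketch of the server-side token check described above.
        public class TokenVerifier {
            private static final String TOKENINFO =
                    "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=";

            // Returns true if the provider recognises the token; the JSON body
            // (which would carry the email address) is ignored in this sketch.
            public static boolean verify(String accessToken) throws Exception {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(TOKENINFO + accessToken))
                        .GET()
                        .build();
                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                // 200 -> flag the stored token as verified;
                // anything else -> tell the iOS client to re-authenticate.
                return response.statusCode() == 200;
            }
        }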

    Read the article

  • K-12 and Cloud considerations

    - by user736511
    Much like every other public-sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition.  Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula, etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students.  One area viewed as a solution to at least some of the challenges is the use of "Cloud" in education.  Although the concept of "Cloud" is nothing new in education, with many providers supplying educational material over the web, school districts now defer previously in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail.  Doing so, however, does not reduce an important risk: that of privacy.  As always, Cloud implementations are viewed sceptically because of the perceived reduction in sensitive-data management and protection, although with a careful approach and the right tooling, the benefits realized by Clouds can extend to security and privacy.  Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies, and strong security regardless of where the sensitive data is stored, on premise or in a Cloud.  Common management tools, role-based access controls, access-policy management, and engineered systems provided by Oracle can be the foundational pieces on which school districts build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how to best take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who need access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

    Read the article

  • Branching and CI Builds with Agile

    - by Bob Horn
    We follow many agile processes, including automated tests, continuous integration, sprint reviews, etc. We're currently having a debate about how often we should branch release builds. We've been doing two-week sprints and trying to deploy to production at the end of each sprint. Some of us think we should be branching every sprint; some of us think that's overkill. If a project encompasses three Visual Studio solutions and we branch every sprint, then that's three branches, and three CI builds to create, every two weeks. If we do this for six months, we'll end up with 36 branches and 36 CI builds. There is overhead involved in that. Those of us who think that branching every sprint is overkill don't have a very good alternative. On my last project, we deployed some solutions from the Main trunk. Yeah, that's not good, but it saved on some of the overhead. What's the right way to manage branching/releasing and CI builds, using agile, when we have such short (two-week) sprint cycles?

    Read the article

  • Should a stack trace be in the error message presented to the user?

    - by Vilx-
    I've got a bit of an argument at my workplace, and I'm trying to figure out who is right and what the right thing to do is. Context: an intranet web application that our customers use for accounting and other ERP stuff. I'm of the opinion that an error message presented to the user (when things crash) should include as much information as possible, including the stack trace. Of course, it has to start with a nice "An error has occurred, please submit the information below to the developers" in large, friendly letters. My reasoning is that a screenshot of the crashed application will often be the only easily available source of information. Sure, you can try to get hold of the client's systems administrator(s), attempt to explain where your log files are, etc., but that will probably be slow and painful (talking to the client representatives mostly is). Also, having immediate and full information is extremely useful in development, where you don't have to go hunting through the log files to find what you need on every exception. (But that could be solved with a configuration switch.) Unfortunately there has been some kind of "security audit" (no idea how they did that without the sources... but whatever), and they complained about the full exception messages, citing them as a security threat. Naturally, the clients (at least one that I know of) have taken this at face value and now demand that the messages be cleaned. I fail to see how a potential attacker could use a stack trace to figure out anything he couldn't have figured out before. Are there any examples, any documented proof, of anyone ever doing that? I think we should fight this foolish idea, but perhaps I'm the fool here, so... who's right?
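
    A rough sketch of the configuration-switch idea mentioned above (Java, hypothetical names): the full trace always goes to the server log under an incident id, and is only appended to the user-facing message when a debug flag is on.

        // Sketch: the log always gets the full trace; users only see it in debug mode.
        public class ErrorPresenter {
            private final boolean showStackTraces;      // e.g. read from app config

            public ErrorPresenter(boolean showStackTraces) {
                this.showStackTraces = showStackTraces;
            }

            public String messageFor(Exception e, String incidentId) {
                // full detail always goes to the log, keyed by an incident id
                System.err.println("[" + incidentId + "] " + stackTraceOf(e));
                StringBuilder msg = new StringBuilder(
                        "An error has occurred. Please report incident " + incidentId
                        + " to the developers.");
                if (showStackTraces) {                  // the configuration switch
                    msg.append("\n\n").append(stackTraceOf(e));
                }
                return msg.toString();
            }

            private static String stackTraceOf(Exception e) {
                java.io.StringWriter sw = new java.io.StringWriter();
                e.printStackTrace(new java.io.PrintWriter(sw, true));
                return sw.toString();
            }
        }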

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friend's website (a fairly old PHP-based one) which they've been advised needs restructuring. The key points are:
    - URLs should be lower case and more "friendly".
    - The root of the domain should not be redirected.
    The first point I'm happy with (the URLs needed tidying up anyway) and I have a draft plan of action; however, the second is baffling me, as to not only the best way to do it but also whether it should be done at all. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the following index.php in the root of the Apache2 instance:

        <?php
        $nextpage = "some-link-with-keywords/";
        header( "HTTP/1.1 301 Moved Permanently" );
        header( "Status: 301 Moved Permanently" );
        header("Location: $nextpage");
        exit(0); // This is optional but suggested, to avoid any accidental output
        ?>

    As far as I'm aware, this has been the case for around three years, and I'm sorely tempted to advise them not to worry about it. It would appear taking off the 301 could:
    - potentially affect page ranking (as the "homepage" would disappear; although it couldn't really disappear, because of the next point...)
    - introduce maintenance issues, as existing users would still have the redirected page in their cache
    - following the above, introduce duplicate content
    - confuse Google and other SEs as to what the homepage actually is now
    I may be over-analysing this, but I have a feeling it's not as simple as removing the 301 from the root and 301'ing the previous target to the root... Any suggestions (including "it's not worth it") are sincerely appreciated.

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
    I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life, and math, and to use them to design my projects. They can help in creating a better design or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as real-time co-processor units (the slaves) plus a master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves, and they are super fast and super real-time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor:
    - hierarchy of processing
    - not using one big brain, but rather several small, distributed brain units
    - using distributed power or resources
    I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher now teaching the algorithms course at Eastern Washington University is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel the class could use a more specific and industry-oriented direction, since that is where most students will go (though suggestions for an academia-oriented class are also welcome). Having only worked in industry for two years, I would like the community's opinion (a wider, much more collectively experienced, and in the end plausibly more credible one) on the quality of the following as a statement of purpose for an algorithms class; and if I am completely off target, your suggestion for the purpose of a required junior-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows. The purpose of the algorithms class is to do three things:
    - Primarily, to teach how to learn, do basic analysis of, and implement a given algorithm found outside of the class.
    - Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction in which to start the development of a new one.
    - Third, to overview a variety of algorithms that exist, and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: divide and conquer, reduce and conquer, transform and conquer, greedy, brute force, iterative improvement, and dynamic programming.
    The question, in short: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world? If not, what would you suggest?

    Read the article

  • Why is wireless slow with Atheros AR9285?

    - by Luke
    I know there are many posts like this; however, none of the fixes I have found have worked. I had the issue on 11.04, and after having no luck fixing it I decided to try 12.04, but this has not fixed the problem. I'm using a Lenovo IdeaPad; the network card is an Atheros Communications AR9285. Edit: adding outputs.

        $ sudo iwconfig
        lo        no wireless extensions.
        wlan0     IEEE 802.11bgn  ESSID:"NETGEAR-PLOW"
                  Mode:Managed  Frequency:2.437 GHz  Access Point: E0:91:F5:7D:1B:BA
                  Bit Rate=65 Mb/s  Tx-Power=15 dBm
                  Retry long limit:7  RTS thr:off  Fragment thr:off
                  Encryption key:off
                  Power Management:on
                  Link Quality=66/70  Signal level=-44 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:77  Invalid misc:63  Missed beacon:0
        eth0      no wireless extensions.

        $ lspci -nnk | grep -iA2 net
        06:00.0 Network controller [0280]: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) [168c:002b] (rev 01)
                Subsystem: Lenovo Device [17aa:30a1]
                Kernel driver in use: ath9k
        --
        07:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02)
                Subsystem: Lenovo Device [17aa:392e]
                Kernel driver in use: r8169

    Thanks

    Read the article

  • Can't start webcam for google video services

    - by wisemonkey
    I've got Ubuntu 11.10 64-bit and have installed the Google video chat plugin. However, the webcam doesn't seem to work (black screen, no video at all). In Cheese it works, but shows a really bad (black-and-white, kind of) image. Following some link, I installed guvcview; if I start that, the image looks neat. Any suggestions on how this can be fixed? If it helps, I've tried the solution:

        $ sudo mv /opt/google/talkplugin/GoogleTalkPlugin /opt/google/talkplugin/GoogleTalkPlugin.old
        $ sudo gedit /opt/google/talkplugin/GoogleTalkPlugin

    putting the following lines in:

        #!/bin/sh
        LD_PRELOAD=/usr/lib32/libv4l/v4l1compat.so /opt/google/talkplugin/GoogleTalkPlugin.old

    or:

        #!/bin/sh
        LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libv4l/v4l1compat.so /opt/google/talkplugin/GoogleTalkPlugin.old

    (because I have both files), and finally:

        $ sudo chmod +x /opt/google/talkplugin/GoogleTalkPlugin.old

    I closed and reopened Chrome, then started Gmail and tried a video call: black screen :-/ OK, so today Google+ finally provided me with a troubleshooting link and advised me:
    "The plug-in won't install. If you're having trouble installing the plug-in, or are receiving a message asking you to reinstall it, you should check to make sure your configuration is right. To do so simply:
    1. Check to make sure the Google Talk Plugin Video Accelerator and Google Talk NPAPI Plugin are enabled. If you're using Chrome you can type about:plugins in your browser to display your plug-ins.
    2. Make sure you're not using Internet Explorer 64-bit (this is a browser version that is 64-bit as opposed to 32-bit).
    3. Ensure that you don't have any "click to run" extensions enabled.
    If you're still experiencing this issue after checking your configuration you can follow these steps:
    1. Refresh the browser page.
    2. Close any running Google Talk plug-in processes.
    3. Close all open and running browser processes.
    4. Restart your computer.
    5. Uninstall and then reinstall the plug-in.
    6. Try a different browser such as Google Chrome or Mozilla Firefox."
    I looked in about:plugins in both Chrome and Firefox, and I don't have the Google Talk NPAPI Plugin. Does that matter? I thought it was installed along with the Google Talk plugin, or not?

    Read the article

  • Prevent oversteering catastrophe in racing games

    - by jdm
    When playing GTA III on Android I noticed something that has annoyed me in almost every racing game I've played (except maybe Mario Kart): driving straight ahead is easy, but curves are really hard. When I switch lanes or pass somebody, the car starts swiveling back and forth, and any attempt to correct it only makes it worse. The only thing I can do is hit the brakes. I think this is some kind of oversteering. What makes it so irritating is that it never happens to me in real life (thank god :-)), so 90% of the games with vehicles in them feel unreal to me (despite probably having really good physics engines). I've talked to a couple of people about this, and it seems you either 'get' racing games or you don't. With a lot of practice, I did manage to get semi-good at some games (e.g. from the Need for Speed series) by driving very cautiously and braking a lot (usually getting a cramp in my fingers). What can you do as a game developer to prevent the oversteering resonance catastrophe and make driving feel right? (For a casual racing game that doesn't strive for 100% realistic physics.) I also wonder what games like Super Mario Kart do differently so that they don't have so much oversteering. I guess one problem is that if you play with a keyboard or a touchscreen (rather than wheels and pedals), you only have digital input: gas pressed or not, steering left/right or not, and it's much harder to steer appropriately for a given speed. The other thing is that you probably don't have a good sense of speed, and drive much faster than you would (safely) in reality. Off the top of my head, one solution might be to vary the steering response with speed.
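
    A minimal sketch of that last idea in Java (the constants are made-up tuning values, not from any real engine): digital input is mapped through a speed-dependent steering limit, so full-on input turns the wheels less the faster you go.

        // Illustrative sketch: scale the steering limit down as speed rises, so
        // all-or-nothing input cannot flick the wheels hard at high speed.
        public final class Steering {
            private static final float MAX_STEER_LOW_SPEED  = 1.00f; // full lock when slow
            private static final float MAX_STEER_HIGH_SPEED = 0.15f; // gentle at top speed

            /** rawInput in [-1, 1]; speed01 = currentSpeed / topSpeed, in [0, 1]. */
            public static float effectiveSteer(float rawInput, float speed01) {
                float limit = MAX_STEER_LOW_SPEED
                        + (MAX_STEER_HIGH_SPEED - MAX_STEER_LOW_SPEED) * speed01;
                return rawInput * limit;
            }
        }

    Smoothing the raw input over a few frames (a simple low-pass filter) is a common companion tweak for keyboard and touch input.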

    Read the article

  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update the coupons. I was able to connect to the host server with no problem, but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them; some links work and are the same as the broken links other than the file name. Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server: I get an FTP error notice saying the file doesn't exist or there is a permissions problem. The funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and can see all the files there; unfortunately, I still can't get the web pages to display the coupons. Has anyone else had this issue, and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni

    Read the article

  • Are bad backlinks causing thousands of 404 and 410 errors in webmaster tools?

    - by Natália
    Our Webmaster Tools account is showing 250,000 errors related to weird links from other sites. These URLs come mostly from non-existent sites, or are being generated directly by our website. Here are some examples of these URLs:

        oursite.com/&q=videos+caseros+sexo+pornos+gratis&sa=X&ei=R638T8eTO8WphAfF2vG8Bg&ved=0CCAQFjAC%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F4%2Fpage%2F5%2Fpage%2F4/page/3

    Our site is a popular Spanish adult site, yet we don't have keywords like the ones mentioned in this URL. Apparently this link comes from our site. Some more examples:

        oursite.com/&q=losmejoresvideosporno&sa=X&ei=U__8T-BnqK7RBdjmhYsH&ved=0CBUQFjAA%2F%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3%2Fpage%2F4%2Fpage%2F3%2Fpage%2F2%2Fpage%2F3/page/4

    Once again: not our queries, not our URLs.

        oursite/tag/tetonas

    We think that it might be another site, which has a policy of extremely bad SEO based on other sites' branding and keyword usage:

        thirdsite/buscador/tetonas-oursite

    The question is: if other sites are generating these URLs, how can we prevent this? Why is the tag being generated if no link was added to the other site? What should we do with these errors? 301? 410 Gone? I have read all the similar Q&As here, but none of them seems to solve our problem. It is not likely to be a bad ad (I inspected them all). Maybe some old content which Google decided to recrawl suddenly? Maybe a third party's bad SEO policy? Maybe all of the above?

    Read the article

  • Cannot access personal website from home IP. More details inside.

    - by GX67
    This is a recent problem I've been having. My site can be accessed from almost everywhere except from my home IP, where I do most of my editing/updating, etc. I've tested my connection from my school's network, from a friend's connection out of state (multiple states), and through a tethered connection with my friend's Android. It works in all those cases: viewing, accessing the cPanel, and using FTP. Here's what happens when I try to view it from my home IP:
    - The page times out in Firefox, IE, and Chrome.
    - Using cmd, I ran tracert and ping, both failed attempts. Log here.
    - downforeveryoneorjustme.com says my site is up, and so do the other site checkers.
    - I can't access my cPanel or FTP accounts.
    - I can't access the host site. (I use perfectz.info for hosting, and I can't access their site either.)
    System settings:
    - No firewall enabled.
    - Ports are seemingly properly forwarded (the ports are open in the router settings, and are open everywhere else).
    - I have an email forwarder set up from the cPanel that works just fine (i.e. I can receive emails sent to that address).
    If any other information is needed, I'll do my best to provide it.
    UPDATE @ilhan: I use two things: 1) the site cPanel, in-browser, and 2) Dreamweaver CS5 FTP. @Matthias: I tested both, and it passes the dual stack with a 10/10. What should I do then?

    Read the article

  • Internal Mic not working on Dell Adamo 13

    - by AFD
    So I'm using elementary Luna Beta (based on Ubuntu 12.04 LTS) on a spare HDD, after successfully using the Jupiter release (based on 10.10) for many years. In Jupiter my internal mic worked out of the box, and with Skype installed from a deb it was all set up without having to set foot inside the terminal. In Luna Beta the internal mic is not recognised by the OS, and so is also not recognised by Skype. I believe the hardware is this (from sudo lshw):

        *-multimedia
             description: Audio device
             product: 82801I (ICH9 Family) HD Audio Controller
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 03
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=HDA Intel latency=0
             resources: irq:46 memory:f8600000-f8603fff

    As well as this, I ran cat /proc/asound/card0/codec* | grep Codec, which gave me:

        Codec: IDT 92HD73C1X5
        Codec: Intel Cantiga HDMI

    How do I tweak Luna to get this hardware working properly? I'm able to swap my Luna HDD with the Jupiter HDD to help troubleshoot what the differences are between the two, and why the more recent OS can't find/use the mic correctly. Thanks in advance for any help you can give.

    Read the article

  • What follows after lexical analysis?

    - by madflame991
    I'm working on a toy compiler (for a simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the string of tokens? Here's what I have gathered so far:
    - One can already do syntax highlighting with only the list of tokens: numbers and operators get coloured accordingly, and keywords too.
    - Auto-formatting (indenting) should also be possible. How? Specify, for each token type, how many white spaces or newline characters should follow it. Also, while printing tokens, modify an alignment variable: when the code printer reads "{" it increments the alignment variable by 1, and decrements it by 1 for "}"; whenever it starts printing on a new line, it indents according to this alignment variable (see the sketch after this list).
    - In languages without nested subroutines, one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
    - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (OK, you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
    - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls whom. If one can identify the body of a function, then one can also search for mentions of other functions' names.
    - Gathering statistics about the code, like number of lines, instructions, subroutines.
    EDIT: I clarified why I think some processes are possible. As I read comments and responses, I realise that the answer depends very much on the language I'm parsing.
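
    A small Java sketch of the alignment-variable printer described in the second bullet above; token classification is simplified to literal brace and semicolon matches:

        import java.util.List;

        // Sketch: print a token stream, tracking an alignment counter that is
        // bumped on "{" and dropped on "}", and indenting each fresh line by it.
        public class TokenPrinter {
            public static String format(List<String> tokens) {
                StringBuilder out = new StringBuilder();
                int alignment = 0;                        // the alignment variable
                for (String token : tokens) {
                    if (token.equals("}")) alignment = Math.max(0, alignment - 1);
                    if (out.length() == 0 || out.charAt(out.length() - 1) == '\n') {
                        out.append("    ".repeat(alignment));  // indent fresh lines
                    }
                    out.append(token);
                    if (token.equals("{")) { alignment++; out.append('\n'); }
                    else if (token.equals(";") || token.equals("}")) out.append('\n');
                    else out.append(' ');
                }
                return out.toString();
            }

            public static void main(String[] args) {
                // prints: if ( x ) {\n    y = 1 ;\n}
                System.out.print(format(List.of(
                        "if", "(", "x", ")", "{", "y", "=", "1", ";", "}")));
            }
        }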

    Read the article

  • WizMouse Enables Mouse Over Scrolling on Any Window

    - by ETC
    WizMouse is a free and lightweight Windows application that enables a simple but effective trick: the ability to scroll the contents of the window under your mouse cursor without shifting focus to that window. It may not seem like much at first glance, but the ability to scroll a window without having to click on it and shift focus away from your current window is a huge time saver. Once WizMouse is installed, simply mouse over any open window and engage your scroll wheel for instant scrolling, with no additional click or shift in focus necessary. You'll get so used to it you'll forget it wasn't built into Windows from the start. Hit up the link below to grab a copy of WizMouse, a free and Windows-only application. WizMouse [Antibody Software]

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, but probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered; it's that simple. There's no need to render a tile that won't be seen. But since you have 1000x1000 objects in your map, or perhaps fewer, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not. Question: what is the best way to implement this efficiently, so that it can quickly determine which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points with a texture inside that shape. Question: how do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle: just see whether its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put: how do you manage the code and the data efficiently so as not to have to loop through a million objects at the same time?
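
    For the tile case, a minimal culling sketch (Java, illustrative names): derive the range of visible tile indices from the camera rectangle, so only the tiles on screen are ever visited.

        // Hypothetical sketch: visit only tiles overlapping the camera rectangle.
        public final class TileCulling {
            public static void drawVisible(float camX, float camY,
                                           float viewW, float viewH,
                                           int tileSize, int mapW, int mapH) {
                int firstX = Math.max(0, (int) (camX / tileSize));
                int firstY = Math.max(0, (int) (camY / tileSize));
                int lastX  = Math.min(mapW - 1, (int) ((camX + viewW) / tileSize));
                int lastY  = Math.min(mapH - 1, (int) ((camY + viewH) / tileSize));
                for (int y = firstY; y <= lastY; y++)       // on a 1000x1000 map this
                    for (int x = firstX; x <= lastX; x++)   // visits only on-screen tiles
                        drawTile(x, y);
            }
            private static void drawTile(int x, int y) { /* hypothetical draw call */ }
        }

    For the irregular, many-point shapes, a common first step is the same idea: precompute an axis-aligned bounding box per shape and test that box against the camera rectangle; only shapes whose box intersects the view need a finer per-point check.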

    Read the article
