Search Results

Search found 25503 results on 1021 pages for 'browser security'.

Page 544/1021 | < Previous Page | 540 541 542 543 544 545 546 547 548 549 550 551  | Next Page >

  • Is sudo dd taking too long to wipe hard drive?

    - by Adam133718
    I have a 200 GB HDD which I removed from a MacBook because of several corrupt files at startup. One thing led to another and I decided that I needed to format the drive. I used the command sudo dd if=/dev/zero of=/dev/sdb, which is supposed to wipe everything off the hard drive. It is my understanding that the command writes zeros over every bit on the drive, which I would imagine must take a while. The process has been going for about 18 hours now. I can use other functions of the operating system, like the web browser, and I can even open another terminal window, so I know the system is not frozen. Should I restart the process or let it continue? Any advice will help. Thanks. By the way, I already noticed a similar post that was previously answered, though that user was not using the same command as I am.

    Read the article

  • SQLAuthority News - We're sorry, but your computer or network may be sending automated queries. To pro...

    I use multiple browsers when I am working with multiple projects simultaneously. Often I use Google Reader to read a few feeds. Recently, I faced the following error, and it will not go away. I even restarted my computer and rebooted my network. I am confident that my computer does not have viruses or [...]

    Read the article

  • HTTP resource bundling/streaming practice

    - by icelava
    Our SPA (plain HTML and JavaScript) makes use of a huge volume of JavaScript and other resources that are downloaded via XHR. Given the sheer number of components and browsers' simultaneous-request limits, we're thinking of ways to deliver our resources more efficiently. A method we're considering is bundling several resources that logically form a coherent group into a single file, thus reducing it down to only one XHR (per group). Furthermore, to make it more responsive, we'd like to constantly inspect the partial responseText during the LOADING state, determine whether a usable chunk (an atomic resource) has already been downloaded, and make it available for deserialization/processing even before the XHR is DONE (a stream-like experience). Surely somebody else has considered roughly the same approach before, but we haven't really come across any library/framework or container file format that is suitable for our scenario. Does anybody else know of something similar?
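    A minimal sketch of the streaming idea, assuming a hypothetical bundle format in which resources are separated by a sentinel string (the delimiter, URL and deserializeResource handler below are illustrative assumptions, not an existing standard):

        // Poll the partial responseText while the XHR is LOADING (readyState 3)
        // and peel off any complete, delimiter-terminated resources.
        var DELIM = '\n--BUNDLE-BOUNDARY--\n';  // assumed delimiter
        var consumed = 0;                       // offset of the first unprocessed character

        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/bundles/core-group.bundle', true);  // hypothetical bundle URL
        xhr.onreadystatechange = function () {
          if (xhr.readyState === 3 || xhr.readyState === 4) { // LOADING or DONE
            var text = xhr.responseText;
            var boundary;
            while ((boundary = text.indexOf(DELIM, consumed)) !== -1) {
              var chunk = text.slice(consumed, boundary);     // one complete resource
              consumed = boundary + DELIM.length;
              deserializeResource(chunk);                     // app-specific handler (assumed)
            }
          }
        };
        xhr.send();

    Older IE versions may not expose responseText until DONE, so a fallback to whole-bundle processing would still be needed there.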

    Read the article

  • Multiple vulnerabilities in libexif

    - by Umang_D
    CVE            Description                                                               CVSSv2 Base Score
    CVE-2012-2812  Improper Restriction of Operations within the Bounds of a Memory Buffer  6.4
    CVE-2012-2813  Improper Restriction of Operations within the Bounds of a Memory Buffer  6.4
    CVE-2012-2814  Improper Restriction of Operations within the Bounds of a Memory Buffer  7.5
    CVE-2012-2836  Improper Restriction of Operations within the Bounds of a Memory Buffer  6.4
    CVE-2012-2837  Numeric Errors                                                            5.0
    CVE-2012-2840  Numeric Errors                                                            7.5
    CVE-2012-2841  Numeric Errors                                                            7.5
    CVE-2012-2845  Numeric Errors                                                            6.4
    Component: libexif
    Product and Resolution: Solaris 11 11/11 SRU 12.4
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • Feasible to send marketing emails as an image?

    - by Anonymous -
    Is it feasible to send marketing emails entirely as images, apart from a link at the top giving the recipient the option to view the email online (in their browser) and one in the footer to unsubscribe from our mailing list? Anyone who has coded an HTML email template before knows how much of a pain it is to end up with a final design that displays 'properly' (it rarely displays the same in all clients) and doesn't break. I understand there's the possibility of people simply ignoring the email altogether should their email clients be set not to download images automatically, but many of our emails primarily feature images anyway. Thoughts?

    Read the article

  • Downsides of using Lubuntu on a good computer [closed]

    - by Yamitatsu
    I have a simple question, yet it is hard to find any answer to it. Are there any downsides to using Lubuntu on a good laptop? The one I purchased would run Ubuntu really well, but I like the look & feel of Lubuntu. I mainly use it to code, watch movies and listen to music, usually with a load of applications open at the same time, 20+ tabs across multiple web browsers, etc. Since Lubuntu is lightweight, I wonder if it lacks some useful functions or something like that.

    Read the article

  • Start of Career: with Java or PHP [closed]

    - by Anusha
    I am very new to this programming career. I am currently working with PHP & MySQL (I joined six months ago) on an e-commerce project. At the same time I am also learning Java and have just completed Advanced Java. I can code in both. I am also good at SQL, Oracle and MySQL. My question is: where do I set my career, on PHP or Java? Which has more scope and future security? Also, is there any job profile which includes both, and if so, is it good to work on both? I am quite confused about this; please help.

    Read the article

  • JavaScript: scroll position (Webkit engine) [migrated]

    - by Julien
    I'm currently trying to use JavaScript to find out how far down the page the user has scrolled; for Firefox 8.0, the keyword is pageYOffset. To say things mechanically: The page has a certain height. In Firefox, the useful object is document.documentElement.scrollHeight. The browser's visible area also has a certain height. In Firefox, the object is window.innerHeight; in IE8, document.documentElement.clientHeight. I need to know where the user is in the page vertically; in other words, how many pixels down the page the user has scrolled. Does Webkit have a DOM object that refers to the current scroll position? Thank you.
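    A minimal cross-engine sketch, assuming only standard DOM properties: WebKit browsers expose the same window.pageYOffset as Firefox, and the scrollTop fallbacks cover older IE:

        // Returns how many pixels down the page the user has scrolled.
        // pageYOffset works in Firefox and WebKit; the fallback covers IE8.
        function getScrollY() {
          return (window.pageYOffset !== undefined)
            ? window.pageYOffset
            : (document.documentElement || document.body).scrollTop;
        }

    Combined with the scrollHeight and clientHeight values already mentioned, getScrollY() / (scrollHeight - clientHeight) gives the scroll position as a fraction of the page.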

    Read the article

  • Is it practically useful for a newbie to forgo the GUI in Ubuntu?

    - by Kifsif
    My Ubuntu is 12.04. I have just started learning Linux, and Ubuntu in particular. To remember commands more quickly, I'd like to forgo the GUI. But there are some problems. I don't know where installed programs are in order to launch them. For example, I have a PDF file, and I know that there is a program to view such files. If I were using the GUI, I would just click on the PDF file and see that it opens in Document Viewer 3.4.0. Then I would like to launch the Firefox web browser. Even if I know it is installed, how to find the file to launch using just the CLI is a mystery to me. Could you suggest anything?

    Read the article

  • Getting started with SOAP [closed]

    - by EmmyS
    A site I developed has a new requirement to get weather data from the National Weather Service. They have quite a bit of info on how to use SOAP to get their data and display it in the browser, but what we need to do is use a cron job to get the data at specific intervals and then parse it out into a database. I have no problem writing PHP code that will run an XSLT and parse XML records out into SQL queries, but I have no idea how to handle this with SOAP (which I've never worked with). Do I get the data via a SOAP request, save it to an XML file on my web server, then run the XSLT against that? Or is there some other way to go about this?

    Read the article

  • Is this a valid HTTP response? [migrated]

    - by fatmck
    I am writing a web server using C++, which responds to all requests with the following:

        static std::string rsp[] = {
            "HTTP/1.1 200 OK\r\n",
            "Server: WebServer\r\n",
            "Content-Type: text/html\r\n",
            "Content-Length: 3\r\n",
            "Connection: close\r\n",
            "\r\n",
            "123"
        };

    The content "123" is shown successfully in a browser. But when I use apache-ab to run a test, ab always shows an error like this:

        ab -n 1 -c 1 http://127.0.0.1:1080/
        apr_socket_recv: Connection reset by peer (104)

    I thought I was closing the socket too quickly, so I commented out the close() call. But then ab just hangs; it seems to be waiting for a complete response.
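    The response itself looks well-formed; one common cause of this symptom (an assumption here, since the accept/read loop isn't shown) is closing the connection while unread request bytes are still in the socket's receive buffer, which makes the kernel send an RST instead of a clean FIN. A minimal sketch of the intended behaviour in Node.js for comparison, sending the same bytes but only after the full request head has arrived:

        // Sketch: read the whole request head, then reply and close gracefully.
        const net = require('net');

        const body = '123';
        const head = [
          'HTTP/1.1 200 OK',
          'Server: WebServer',
          'Content-Type: text/html',
          'Content-Length: ' + body.length,
          'Connection: close',
          '', ''
        ].join('\r\n');

        net.createServer((socket) => {
          let request = '';
          socket.on('data', (chunk) => {
            request += chunk;
            // The blank line marks the end of the request head.
            if (request.indexOf('\r\n\r\n') !== -1) {
              socket.end(head + body);   // write the response, then FIN
            }
          });
        }).listen(1080, '127.0.0.1');

    In the C++ server the equivalent fix would be to recv() the request until the terminating blank line before send()ing the response and calling close().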

    Read the article

  • Serverless Web Application

    - by Andrea Di Persio
    In my company we work on software that produces reports in HTML format. My bosses love the fact that static HTML pages can be moved across computers simply by moving/copying a folder, with no web server involved, so the customer only needs a browser. The problem is that they are asking me to implement a lot of features which are very hard to implement properly and cleanly without an application server. Cross-domain problems with frames, the impossibility of working with GET and POST data, no URL routing... it is very hard to work within these limitations. Has anyone had a similar experience and want to share their tricks/suggestions? Do I need to tell my boss 'there is no future without a web server'? Regards.

    Read the article

  • Ubuntu 12.04 on VMware Player showing Wired Network Instead of Wireless Network

    - by Fak365
    I am new to Ubuntu. Recently I installed Ubuntu 12.04 in VMware Player (a virtual machine) on my Dell laptop running Windows 7 Ultimate 32-bit, just to check the security of my wireless network and try to crack my WiFi (WPA-PSK) password. But Ubuntu does not show the wireless network; it shows the two-arrow (wired) icon even though I have not connected an Ethernet cable to the laptop and am connected through WiFi on my main OS (Windows 7). It shows a wired network and the internet works, but it does not show a WiFi connection. On Windows 7 the WiFi is connected, shown and working correctly. My main goal is to crack the WiFi password, but the VM can't detect the WiFi network, so what should I do? Please help, thanks. My laptop specification: Laptop: Dell Latitude D620; OS: Windows 7 Ultimate 32-bit; Processor: Core 2 Duo T7200 @ 2 GHz; RAM: 2 GB; WiFi card: Intel PRO/Wireless 3945ABG; Virtual machine: VMware Player 5.0.1. If I need to install drivers, please tell me how and which driver I should install. Thanks in advance.

    Read the article

  • Xubuntu and other Debian based distros slow

    - by William V
    I have a Compaq Presario SR1950NX desktop computer with an AMD64 3800+ processor and 1 GB of RAM, and it seems that Ubuntu, Xubuntu and Lubuntu are all laggy. Things such as clicking on menus and opening programs seem slow, and the UI renders in pieces. When using the browser the system slows down considerably. I ran the top command and I notice that Xorg hits 30 to 40 percent CPU when running the browsers. I have tried these distros on a spare P4 machine and it is even worse. As long as I don't have several things open at one time I can manage to get around, although sluggishly. I also notice that I can't get Debian-based distros to install in 64-bit (crtc6 failure), only in 32-bit. Can anyone tell me what it is that I might be doing wrong? I have an integrated Nvidia card and have tried several of the recommended drivers, which sometimes result in no boot screen upon reboot. Thanks

    Read the article

  • Should I keep separate client codebases and databases for a software-as-a-service application?

    - by John
    My question is about the architecture of my application. I have a Rails application where companies can administer everything related to their clients. Companies would buy a subscription and their users can access the application online. Hopefully I will get multiple companies subscribing to my application/service. What should I do with my code and database? The options are: a separate app code base and database per company; one app code base but a separate database per company; or one app code base and one database. The decision involves security (e.g. a user from company X should not see any data from company Y), performance (let's suppose it becomes successful; it should perform well), and scalability (again, if successful, it should perform well, but it should also be easy for me to handle all the companies, code changes, etc.). For the sake of maintainability, I tend to opt for the single code base, but for the database I really don't know. What do you think is the best option?

    Read the article

  • The architecture and technologies to use for a secure, fast, reliable and easily scalable web application

    - by DSoul
    ^ For the actual questions, skip to the lists down below. I understand that this is a vague topic, but please, before you turn the other way and disregard me, hear me out. I am currently doing research for a web application (I don't know if application is the correct word for it, but I will proceed with that for now) that one day might need to be everything mentioned in the title. I am bound by nothing: every language, OS and framework is acceptable, but only if it proves its usefulness. And if you are going to say that scalability and speed depend on the code I write for this application, then I agree, but I am just trying to find something that wouldn't stand in my way later on. I have done quite a bit of reading on this subject, but I still don't have a clear picture of what suits my needs, so I come to you for directions. I know you must all be wondering what I'm building, but I assure you that it doesn't matter. I have heard of the twelve-factor app, though; if you have any similar guidelines to suggest, please go ahead. For the sake of keeping your answers as open as possible, I'm not going to share my own experience regarding anything written in this question.
    ^ Skippers, start here. First off, the weights of the requirements are probably something like this (on a scale of 10): Security - 10; Speed - 5; Reliability (concurrency) - 7.5; Scalability - 10. Speed and concurrency are not a top priority, in the sense that the program can be CPU-intensive (and therefore slow) and accept only a modest number of concurrent users, but both of these factors must be improvable by scaling the system. Anyway, here are my questions:
    1. How many layers should the application have so that it is future-proof and can best fulfill the aforementioned requirements? For now, what I have in mind is the most common version: a completely separated front end, which might be a web page or an MMI application or even both; some middleware handling communication between the front and the back end, probably a server that communicates with the front end via HTTP (how communication with the back end should be handled probably depends on the back end); and the back end itself, something that handles data through resources like a DB and does various computations with the data. The back end, as the highest-priority part of the software, must be easy to spread across multiple computers later on and have no known security holes. I think ideally the middleware should send a request to a queue, from where one of the back-end processes takes the request, chops it up into smaller parts and puts those parts back onto the same queue as the initial request, after which the parts are handled by other back-end processes; something map-reduce-y, so to say (a rough sketch of this queue idea follows below).
    2. What frameworks, languages and so on should these layers use? The technologies are not that important at this moment, so you can ignore this part for now. I've been pointed to node.js for this part; do you know any better alternatives, or have any reasons why I should (or should not) use node.js for this particular job? I actually have no good idea what to use here; there are too many options out there, so please direct me. This part (and the middleware layer too, I think) depends a lot on the OS, so suggest OSs alongside the technologies/frameworks. Initially, all computers (or one, for starters) hosting the back end are going to be virtual machines.
    Please do give suggestions on any part of the question that you feel you have comprehensive knowledge and/or experience of, and also point out if you feel that any part of the current set-up means an instant (or even distant) failure, or if I missed a very important aspect to consider. I'm not looking for a definitive answer on how to achieve my goals, because there certainly isn't one, and I haven't provided you with all the required information. I'm just looking for recommendations and directions on what to look into. Also, bear in mind that this isn't something I have to get done quickly to sell and let be rewritten by the new owner (which, I've been told multiple times, is what I should aim for). I have all the time in the world and I really just want to learn to do something really high-end. Also, excuse me if my language isn't the best; I'm not a native speaker. Thanks in advance to anyone who takes the time to help me out here. PS: when I do come up with a good architecture/design for this project, I will certainly make it an open project and keep you up to date with its development, including what you could have told me earlier. For obvious reasons the very same question got closed on SO, but could you still help me?
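    A very rough sketch of the queue idea from question 1, in node.js since that was the suggestion mentioned (the in-memory array below is only a stand-in for a real message broker, which a multi-machine back end would actually need; names and ports are illustrative):

        const http = require('http');

        const queue = [];   // stand-in for a real broker; lives in one process only

        // Middleware: accept a request, enqueue a job, reply immediately with a job id.
        http.createServer((req, res) => {
          const job = { id: Date.now() + ':' + Math.random(), path: req.url, part: null };
          queue.push(job);
          res.writeHead(202, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({ accepted: job.id }));
        }).listen(8080);

        // Back-end worker: take a job off the queue; if it is an original request,
        // chop it into smaller parts and put the parts back onto the same queue,
        // otherwise process the part.
        setInterval(() => {
          const job = queue.shift();
          if (!job) return;
          if (job.part === null) {
            ['a', 'b'].forEach((part) =>
              queue.push({ id: job.id + ':' + part, path: job.path, part: part }));
          } else {
            console.log('processing part', job.id);
          }
        }, 100);

    In a real deployment the middleware and the workers would be separate processes on separate machines, which is exactly why the shared array would have to become an external queue service.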

    Read the article

  • "Index of ..." directory's files listing

    - by Tony
    In my courses we get homework on a site, in folders such as: http://example.com/files/tasks1-edc34rtgfds http://example.com/files/tasks2-0bg454fgerg http://example.com/files/tasks3-h1dlkjiojo8 ... Each tasksi-xxxxxxxxxxx is a folder with 11 random characters at the end. When you view the above URLs in a browser you see "Index of /tasksi-xxxxxxxxx" with all the files in that folder. When you view http://example.com/files/ you see only an empty HTML page with the words "Hello, world". The problem is that you can't look into the next task without knowing its URL. So, for example, we've got the URLs for tasks1 and tasks2, and we can't guess what the tasks3 URL will be (as we would need to know the 11 random characters at the end). How can I get the list of all directories? (Is there a way to type something like http://example.com/files/task1-aflafjal343/..? or another way?) I want to see all upcoming homework tasks.

    Read the article

  • A tale of two viewports - part one

    Back in November I started complicated research into measuring the widths and heights of various interesting elements in mobile browsers. This research kept me occupied for months and months, and frankly I became a bit afraid of it because the subject is so complicated. Besides, when I re-did some tests in March I pretty quickly figured out I'd made some nasty mistakes in my original tests. Back to the drawing board. However, after a review round by some browser vendors and some rewriting, it's done now. Today...

    Read the article

  • Hostname on intranet

    - by user7242
    I have a test server running Ubuntu Server in a Windows network. Networking is configured as follows:

        auto eth0
        iface eth0 inet dhcp
            hostname ca

    The command cat /etc/hostname returns ca. But when I use the command host 10.49.156.196 (its current IP address) from another machine on the network, it returns the following:

        196.156.49.10.in-addr.arpa domain name pointer owner-pc.xxxxx.xxx

    I can access the machine via SSH and the browser using the IP address, but not by the name ca. Any suggestions? I tried installing samba/nmbd as suggested in another post, but to no avail.

    Read the article

  • Proxy Client for Ubuntu

    - by WindowsEscapist
    I want to use a proxy for web browsing, similar to UltraSurf for Windows. I've tried to use Tor, but it isn't working! The problem is that whenever I search for something along the lines of "ubuntu + linux proxy", sites assume that I want to set up a proxy server rather than use one. I just want something with little to no configuration needed (i.e. I don't have my own proxy server). UltraSurf is free software which enables users inside countries with heavy Internet censorship to visit any public web site in the world safely and freely. Users in countries without internet censorship also use it to protect their internet privacy and security.

    Read the article

  • Multiple vulnerabilities in Pidgin

    - by RitwikGhoshal
    CVE            Description                                                  CVSSv2 Base Score
    CVE-2010-4528  Improper Input Validation vulnerability                      4.0
    CVE-2011-1091  Denial of service (DoS) vulnerability                        4.0
    CVE-2011-2943  Denial of service (DoS) vulnerability                        4.3
    CVE-2011-3184  Resource Management Errors vulnerability                     4.3
    CVE-2011-3185  Improper Input Validation vulnerability                      9.3
    CVE-2011-4601  Improper Input Validation vulnerability                      5.0
    CVE-2011-4602  Improper Input Validation vulnerability                      5.0
    CVE-2011-4603  Improper Input Validation vulnerability                      5.0
    CVE-2011-4922  Information Exposure vulnerability                           2.1
    CVE-2011-4939  Permissions, Privileges, and Access Controls vulnerability   6.4
    CVE-2012-1178  Resource Management Errors vulnerability                     5.0
    Component: Pidgin
    Product and Resolution: Solaris 10 (SPARC: 147992-02, X86: 147993-02)
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • URL subfolder rewrite without server access

    - by Duke03
    I am having trouble with the following. I have a site in development where every link on the site points to the wrong folder. Example: the page is at example.com/en/home/, but a site link goes to example.com/en/, which throws a 404. The way the system is set up, fixing this requires server access, but I do not have that, and I/S is backlogged with requests and will take a week. But I still need to develop the site. So is there a way to have the browser recognize when example.com/en/ is hit and automatically redirect it to example.com/en/home/ so it bypasses the 404 and I can actually work? I'm looking for anything that gets the job done. I am considering developing a Chrome app to do this, but that would mean a shit ton of overtime and more work I don't want to do. Is there an easier way of doing this?
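    A lighter option than a full Chrome app might be a userscript run by an extension such as Tampermonkey, or even a bookmarklet; a minimal sketch, assuming the broken links always land on the bare /en/ path and should go to /en/home/ (the hostname below is the placeholder from the question):

        // If the browser lands on the bare /en/ path, send it on to /en/home/
        // so the 404 page is only visible for an instant.
        if (location.hostname === 'example.com' && location.pathname === '/en/') {
          location.replace('/en/home/');
        }

    This only papers over the broken links in your own browser, of course; the links themselves still need the server-side fix once I/S gets to it.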

    Read the article

  • Oracle Solaris 11.1 Now Available; Learn More About It at November 7th Webcast

    - by Larry Wake
    Oracle Solaris 11.1 is now available for download. As detailed earlier, this update to Oracle Solaris 11 provides new enhancements for enterprise cloud computing: security, network, and provisioning advances, in addition to significant new performance features, make an already great release even better. For more information, you can't do better than the upcoming launch event webcast, featuring a live Q&A with Solaris engineering experts and three sessions covering what's new with Oracle Solaris 11.1 and Oracle Solaris Cluster. It's on Wednesday, November 7, at 8 AM PT; register today.

    Read the article

  • Mouse running amok

    - by Norene Bult
    While using my mouse, it will all of a sudden take off and might run to the calendar and/or the trash. It has opened the calendar, then gone down and opened the trash, several times. It started out doing this once in a while, opening something when I hadn't moved it there; it did it on its own. Is someone controlling my computer? That's how it acts. I would like to get this straightened out. If my mouse is just going berserk then I will install a new one. If it is something else, I'd like to know what could be causing it. I have my security set so no one can access my computer. Please, can anyone help?

    Read the article

  • When and how is an image cached for an ASPX page with ContentType = image/jpeg?

    - by Aamir Hasan
    In ASP.NET you can cache your page. You can vary the output cache by the following: the query string in an initial request (HTTP GET); control values passed on postback (HTTP POST values); the HTTP headers passed with a request; the major version number of the browser making the request; or a custom string in the page (in that case, you create custom code in the Global.asax file to specify the page's caching behavior). Link: http://msdn2.microsoft.com/en-us/library/xadzbzd6(VS.80).aspx

    You can set output caching for your GetImage.aspx so that you don't have to re-query the database on every image request, but you must use VaryByParam so that you have a cached version for every arrangement of parameters. Set the output cache for your page like this, at the top of the ASPX page:

        <%@ OutputCache Duration="600" VaryByParam="ID,Height,Width" %>

    The VaryByParam attribute allows you to vary the cached output depending on the query string. Adding this will make your images be cached for 600 seconds, so if the same image is requested within this period, the cached version will be returned.

    Read the article
