Search Results

Search found 30072 results on 1203 pages for 'thorbis website design'.

Page 164/1203 | < Previous Page | 160 161 162 163 164 165 166 167 168 169 170 171  | Next Page >

  • Websites that introduce new stuff

    - by user33929
    Is there any blog or website that introduces new fun/fancy websites or new tech (software/hardware)? I am talking about the kind of site that might have introduced digg.com, superuser.com, mint.com, or delicious.com when those sites first came out. It should also cover new, handy/useful third-party applications and tools. Thanks.

    Read the article

  • Why doesn't my web page load completely every time?

    - by Gayanee Wijayasekara
    I have an intranet website running on IIS 7. When I try to load my site, it reacts differently every time. These are the different scenarios that occur when I try to load it:

    - The site loads right away and works properly.
    - The site loads slowly and some of my styling/images/JavaScript does not appear to load correctly.
    - I receive a "503 Service Unavailable" error.

    Any ideas why this is happening?

    Read the article

  • Should library classes be wrapped before using them in unit testing?

    - by Songo
    I'm doing unit testing, and in one of my classes I need to send a mail from one of the methods, so using constructor injection I inject an instance of the Zend_Mail class, which is part of the Zend framework. Example:

        class Logger {
            private $mailer;

            function __construct(Zend_Mail $mail) {
                $this->mailer = $mail;
            }

            function toBeTestedFunction() {
                // Some code
                $this->mailer->setTo('some value');
                $this->mailer->setSubject('some value');
                $this->mailer->setBody('some value');
                $this->mailer->send();
                // Some code
            }
        }

    However, unit testing demands that I test one component at a time, so I need to mock the Zend_Mail class. In addition, I'm violating the Dependency Inversion Principle, as my Logger class now depends on a concretion, not an abstraction. Does that mean that I can never use a library class directly and must always wrap it in a class of my own? Example:

        interface Mailer {
            public function setTo($to);
            public function setSubject($subject);
            public function setBody($body);
            public function send();
        }

        class MyMailer implements Mailer {
            private $mailer;

            function __construct() {
                $this->mailer = new Zend_Mail; // The class isn't injected this time
            }

            function setTo($to) {
                $this->mailer->setTo($to);
            }

            // Implement the rest of the interface functions similarly
        }

    And now my Logger class can be happy :D

        class Logger {
            private $mailer;

            function __construct(Mailer $mail) {
                $this->mailer = $mail;
            }

            // Rest of the code unchanged
        }

    Questions:

    1. Although I solved the mocking problem by introducing an interface, I have created a totally new class, MyMailer, that now needs to be unit tested, although it only wraps Zend_Mail, which is already unit tested by the Zend team. Is there a better approach to all this?
    2. Zend_Mail's send() function can actually take a Zend_Transport object when called (i.e. public function send($transport = null)). Does this make the idea of a wrapper class more appealing?

    The code is in PHP, but answers don't have to be; this is more of a design issue than a language-specific one.
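    A minimal sketch, assuming PHPUnit, of how the Mailer interface above could be stubbed in a test of Logger; the FakeMailer and test names are illustrative, not from the question:

        <?php
        // A hand-rolled test double for the Mailer interface: it records what
        // Logger asked it to send instead of talking to Zend_Mail.
        class FakeMailer implements Mailer {
            public $sent = false;
            public $to;

            public function setTo($to)           { $this->to = $to; }
            public function setSubject($subject) { }
            public function setBody($body)       { }
            public function send()               { $this->sent = true; }
        }

        class LoggerTest extends PHPUnit_Framework_TestCase {
            public function testToBeTestedFunctionSendsAMail() {
                $mailer = new FakeMailer();
                $logger = new Logger($mailer);

                $logger->toBeTestedFunction();

                $this->assertTrue($mailer->sent);
                $this->assertSame('some value', $mailer->to);
            }
        }

    Because the double is just another implementation of the interface, nothing Zend-specific has to be loaded in the test, which is the main point of the wrapper.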

    Read the article

  • Lazy Processing of Streams

    - by Giorgio
    I have the following problem scenario: I have a text file and I have to read it and split it into lines. Some lines might need to be dropped (according to criteria that are not fixed). The lines that are not dropped must be parsed into some predefined records, and records that are not valid must be dropped. Duplicate records may exist and, in such a case, they are consecutive; if duplicate/multiple records exist, only one item should be kept. The remaining records should be grouped according to the value contained in one field; all records belonging to the same group appear one after another (e.g. AAAABBBBCCDEEEFF and so on). The records of each group should be numbered (1, 2, 3, 4, ...), and for each group the numbering starts from 1. The records must then be saved somewhere / consumed in the same order as they were produced.

    I have to implement this in Java or C++. My first idea was to define functions/methods like:

    - One method to get all the lines from the file.
    - One method to filter out the unwanted lines.
    - One method to parse the filtered lines into valid records.
    - One method to remove duplicate records.
    - One method to group records and number them.

    The problem is that the data I am going to read can be too big and might not fit into main memory, so I cannot just construct all these lists and apply my functions one after the other. On the other hand, I think I do not need to fit all the data in main memory at once, because once a record has been consumed all its underlying data (basically the lines of text between the previous record and the current record, and the record itself) can be disposed of. With the little knowledge I have of Haskell, I immediately thought of some kind of lazy evaluation, in which instead of applying functions to lists that have been completely computed, I have different streams of data that are built on top of each other and, at each moment, only the needed portion of each stream is materialized in main memory. But I have to implement this in Java or C++. So my question is: which design pattern or other technique would allow me to implement this lazy processing of streams in one of these languages?

    Read the article

  • DVD (vob file) to online video viewer?

    - by Nick
    I've been sent a DVD which needs to be put onto a website, but I honestly don't even know where to start. Do I simply convert the file to MP4(?!) using some software and then use something like http://videojs.com/ to view it online? I'm really sorry for the vague question, but I want to produce the best results, with good compression, good quality, and a nice video player interface. Would really appreciate any recommendations. Thank you!

    Read the article

  • Storing images in file system and returning URLs or virtually resizing and returning byte arrays?

    - by ismaelf
    I need to create a REST web service to manage user-submitted images and display them all on a website. There are multiple websites that are going to use this service to manage and display images. The requirement is to have 5 pre-defined image sizes available. The 2 options I see are the following:

    1. The web service creates the 5 images, stores them in the file system, and stores the URLs in the database when the user submits the image. When the image is requested, the web service returns an array of URLs. I see this option being a little hard on the hard drive. The estimates are 10,000 users per site and, let's say, 100 sites. The heavy processing is done when the user submits the image, and each image is pulled from the file system.

    2. The web service stores just the image that the user submits in the file system, and its URL in the database. When the user requests images, the web service gets the info from the DB, loads the image into memory, creates its 5 instances, and returns an object with 5 image arrays (I will probably cache the arrays). This option is harder on the processor and memory. The heavy processing is done when the images get requested.

    A plus I see for option 2 is that it gives me the option to rewrite the URL of the image and make it site-dependent (prettier), rather than having one image repository for all websites. But this is not a big deal. What do you think of these options? Do you have any other suggestions?
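    A minimal sketch of the on-demand resizing step from option 2, assuming the service is written in PHP with the GD extension; the widths, quality setting, and function name are illustrative assumptions, not part of the question:

        <?php
        // Pre-defined output widths for the five image sizes (illustrative values).
        $widths = array(160, 320, 640, 1024, 1600);

        // Load the stored original once and return one resized JPEG byte string
        // per target width; heights are derived from the aspect ratio.
        function buildVariants($originalPath, array $widths)
        {
            $source = imagecreatefromstring(file_get_contents($originalPath));
            $variants = array();
            foreach ($widths as $width) {
                $scaled = imagescale($source, $width); // height defaults to keep aspect ratio
                ob_start();
                imagejpeg($scaled, null, 85);          // 85 = JPEG quality
                $variants[$width] = ob_get_clean();
                imagedestroy($scaled);
            }
            imagedestroy($source);
            return $variants;
        }

        $variants = buildVariants('/path/to/original.jpg', $widths);

    Whether the five variants are cached to disk after the first request or regenerated every time is exactly the CPU-versus-disk trade-off the two options weigh.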

    Read the article

  • HTTP PHP Authentication and Android

    - by edc598
    I am working on a website which I hope to have an application for as well. Because of this, I am creating PHP APIs which will go into my database and serve specific data based on the method/function called. I want to protect these APIs from misuse, however, and I plan on implementing Digest authentication to do so. However, one of the OSes I want to support is Android, and I know that a malicious user would be able to reverse engineer the Android app and figure out my authentication scheme. I am left wondering:

    1. Is there a better way to protect these APIs from misuse?
    2. Is there a way to prevent a malicious user from reverse engineering the app and potentially seeing its source code, enabling them to see my authentication scheme?
    3. If none of these are preventable, is my only option to have a username/password credential specifically for the Android app and, when it is eventually hacked, change the credentials and issue an update for the app?

    I apologize if this is not the place to post such a question; I'm still pretty new to Stack Overflow. Thanks in advance for any insight, it would be quite helpful.

    Read the article

  • The term "interface" in C++

    - by Flexo
    Java makes a clear distinction between class and interface. (I believe C# does also, but I have no experience with it.) When writing C++, however, there is no language-enforced distinction between class and interface. Consequently I've always viewed interface as a workaround for the lack of multiple inheritance in Java. Making such a distinction feels arbitrary and meaningless in C++. I've always tended to go with the "write things in the most obvious way" approach, so if in C++ I've got what might be called an interface in Java, e.g.:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() = 0;
        };

    and I then decided that most implementers of Foo wanted to share some common functionality, I would probably write:

        class Foo {
        public:
            virtual void doStuff() = 0;
            virtual ~Foo() {}

        protected:
            // If it needs this to do its thing:
            int internalHelperThing(int);

            // Or if it doesn't need the this pointer:
            static int someOtherHelper(int);
        };

    which then makes this not an interface in the Java sense anymore. Instead, C++ has two important concepts related to the same underlying inheritance problem:

    1. Virtual inheritance
    2. Classes with no member variables can occupy no extra space when used as a base ("Base class subobjects may have zero size" (Reference))

    Of those I try to avoid #1 wherever possible; it's rare to encounter a scenario where that genuinely is the "cleanest" design. #2 is, however, a subtle but important difference between my understanding of the term "interface" and the C++ language features. As a result of this I currently (almost) never refer to things as "interfaces" in C++ and instead talk in terms of base classes and their sizes. I would say that in the context of C++ "interface" is a misnomer. It has come to my attention, though, that not many people make such a distinction.

    Do I stand to lose anything by allowing (e.g. protected) non-virtual functions to exist within an "interface" in C++? (My feeling is exactly the opposite: a more natural location for shared code.) Is the term "interface" meaningful in C++? Does it imply only pure virtual, or would it be fair to still call C++ classes with no member variables interfaces?

    Read the article

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do). A button is mapped to a method. This is all done with a very simple and natural API. We don't have to think about binding objects, etc. It just works.

    Yet, in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model, just event handlers. I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default Visual Studio approach, your GUI is smart while your domain model (if it exists) is rather anemic.

    Many developers that I work with see value in the Smalltalk approach and try to shoehorn it into the Visual Studio environment. WPF has some dynamic binding features that make it possible, but there are limitations. Inevitably some code that belongs in the domain model ends up in the GUI classes. So, which way do you design/develop your code? Why?

    - GUI first: user interaction is paramount.
    - Domain first: I need to make sure the system is correct before we put a UI on it.

    There are pros and cons to either approach. The domain model fits in there with crystal cathedrals and pie in the sky. The GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: how do you make sure the code is maintainable?

    Read the article

  • Application workflow

    - by manseuk
    I am in the planning process for a new application. The application will be written in PHP (using the Symfony 2 framework), but I'm not sure how relevant that is. The application will be browser based, although there will eventually be API access for other systems to interact with the data stored within the application; again, probably not relevant at this point.

    The application manages SIM cards for lots of different providers. Each SIM card belongs to a single provider, but a single customer might have many SIM cards across many providers. The application allows the user to perform actions against the SIM card, for example activate it, bar it, check on its status, etc. Some of the providers provide an API for doing this, so a single access point with multiple methods, e.g. activateSIM, getStatus, barrSIM, etc. The method names differ for each provider, and some providers offer methods for extra functions that others don't. Some providers don't have APIs but do offer these methods by sending emails with attachments; the attachments are normally a CSV file that contains the SIM reference and the action required, and the email is processed by the provider and replied to once the action has been completed.

    To give you an example: the front end of my application will provide a customer with a list of SIM cards they own and give them access to the actions that are provided by the provider of each specific SIM card. Some methods may require extra data, which will either be stored in the backend or collected from the user frontend. Once the user has selected their action and added any required data, I will handle the process in the backend and provide either instant feedback (in the case of the providers with APIs) or start the process off by sending an email and waiting for its reply before processing it and updating the backend, so that next time the user checks the SIM card its status is correct (i.e. updated by a backend process).

    My reason for creating this question is that I'm stuck! I'm confused about how to approach the actual workflow logic. I was thinking about creating a Provider interface with the most common methods getStatus, activateSIM and barrSIM and then implementing that interface for each provider, so class Provider1 implements Provider, then using a factory to create the required class depending on the user-selected SIM card and invoking the selected method. This would work fine if all providers offered the same methods, but they don't; there is a common subset, but some providers offer extra methods. How can I implement that flexibly? And how can I deal with the processes where the workflow is different, i.e. some methods require an API call and a returned value, while some require an email to be sent, where the next stage of the process doesn't start until the email reply is received?

    Please help! (I hope this is a readable question and that this is the correct place to be asking.)

    Update: I guess what I'm trying to avoid is a big if or switch/case statement; I'm after some design pattern that gives me a flexible approach to implementing this kind of fluid workflow. Anyone?
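    A minimal sketch of the interface-plus-factory idea described above; everything beyond the method names mentioned in the question (the concrete class names, the registry-style factory, the comments) is an illustrative assumption rather than a prescribed design:

        <?php
        interface Provider
        {
            public function getStatus($simReference);
            public function activateSIM($simReference);
            public function barrSIM($simReference);
        }

        // Provider with a web-service API: each call returns a result immediately.
        class ApiProvider implements Provider
        {
            public function getStatus($simReference)   { /* call the provider's API and return its answer */ }
            public function activateSIM($simReference) { /* ... */ }
            public function barrSIM($simReference)     { /* ... */ }
        }

        // Provider that only accepts CSV attachments by email: each call queues a
        // request, and a separate backend job updates the SIM when the reply arrives.
        class EmailProvider implements Provider
        {
            public function getStatus($simReference)   { /* write CSV, send email, mark SIM "pending" */ }
            public function activateSIM($simReference) { /* ... */ }
            public function barrSIM($simReference)     { /* ... */ }
        }

        // A registry keyed by provider name avoids a growing if/switch block.
        class ProviderRegistry
        {
            private $providers = array();

            public function register($name, Provider $provider)
            {
                $this->providers[$name] = $provider;
            }

            public function forSim($providerName)
            {
                if (!isset($this->providers[$providerName])) {
                    throw new InvalidArgumentException("Unknown provider: $providerName");
                }
                return $this->providers[$providerName];
            }
        }

    Provider-specific extras could then live on the concrete classes (or on small additional interfaces the front end checks for), so the common workflow only ever talks to Provider while the synchronous-versus-email difference stays hidden behind each implementation.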

    Read the article

  • What is the right way to group this project into classes?

    - by sigil
    I originally asked this on SO, where it was closed and it was recommended that I ask it here instead. I'm trying to figure out how to group all the functions necessary for my project into classes. The goal of the project is to execute the following process:

    1. Get the user's FTP credentials (username & password).
    2. Check to make sure the credentials establish a valid connection to the FTP server.
    3. Query several Sharepoint lists and join the results of those queries to create a list of items that need to have action taken on them. Each item in the list has a folder.
    4. For each item: zip the contents of the folder, upload the folder to the FTP server using SFTP, and update the item's Sharepoint data.
    5. Email the user an Excel report showing, e.g., items without folder paths and items that failed to zip or upload.

    Steps 2-5 are performed on a periodic basis; if step 2 returns an invalid connection, the user is alerted and the process returns to step 1. If at any point the user presses a certain key, the process terminates.

    I've defined the following set of classes, each of which is in its own .cs file:

    - SFTP: file transfer processes.
    - DataHandler: Sharepoint data retrieval/querying/updating processes. Also makes and uploads the zip files.
    - Exceptions: not just one class; this is the .cs file where I have all of my exception classes.
    - Report: builds and sends the report.
    - Program: the main class for running the program.

    I recognize that the DataHandler class is a god object, but I don't have a good idea of how to refactor it. I feel like it should be more fine-grained than just breaking it into Sharepoint, Zip, and Upload, but maybe that's it. Also, I haven't yet worked out how to combine the periodic behavior with the "wait for user input at any point in the process" part; I think that involves threads, which means other classes to manage the threads... I'm not that well-versed in design patterns, but is there one that fits this project well? If this is too big of a topic to neatly explain in an SO answer, I'll also accept a link to a good tutorial on what I'm trying to do here.

    Read the article

  • Is it good practice to keep 2 related tables (using auto_increment PKs) at the same max auto_increment ID when table1 gets modified?

    - by Tum
    This question is about good design practice in programming. Let's look at this example; we have 2 interrelated tables:

        Table1
        textID - text
        1      - love..
        2      - men...
        ...

        Table2
        rID - textID
        1   - 1
        2   - 2
        ...

    Note: in Table1, textID is an auto_increment primary key. In Table2, rID is an auto_increment primary key and textID is a foreign key. The relationship is that 1 rID will have 1 and only 1 textID, but 1 textID can have a few rIDs. So, when Table1 gets a modification, Table2 should be updated accordingly.

    Ok, here is a fictitious example. You build a very complicated system. When you modify 1 record in Table1, you need to keep track of the related record in Table2. To keep track, you can do it like this:

    Option 1: When you modify a record in Table1, you try to modify the related record in Table2. This could be quite hard in terms of programming, especially for a very, very complicated system.

    Option 2: Instead of modifying the related record in Table2, you decide to delete the old record in Table2 and insert a new one. This is easier for you to program.

    For example, suppose you are using option 2; then when you modify records 1, 2, 3, ..., 100 in Table1, Table2 will look like this:

        Table2
        rID - textID
        101 - 1
        102 - 2
        ...
        200 - 100

    This means the max of the auto_increment IDs in Table1 is still the same (100), but the max of the auto_increment IDs in Table2 has already reached 200. What if the user modifies many times? If they do, then Table2 may run out of IDs. We can use BigInt, but does that make the app run slower?

    Note: If you spend time programming to modify records in Table2 when Table1 gets modified, then it will be very hard and thus error prone. But if you just clear the old records and insert new ones into Table2, then it is much easier to program, and thus your program is simpler and less error prone.

    So, is it good practice to keep 2 related tables (using auto_increment PKs) at the same max auto_increment ID when table1 gets modified?

    Read the article

  • Handshake violation when trying to access one website

    - by Miguel
    I have a TZ 190 Wireless Enhanced with SonicOS Enhanced 4.2.1.0-20e. Yesterday, people could access a bank website which uses HTTPS without any problems. Today, it is impossible to access only that website; every other one works without problems. When checking the log messages, filtering to my IP only, this is what appears, and I suspect it is the cause of this problem, because all other websites are working:

        Priority: Notice
        Category: Network Access
        Message: TCP handshake violation detected; TCP connection dropped
        Source: X.Y.Z.3, 51997, LAN (admin)
        Destination: 200.14.232.18, 443, WAN
        Notes: Handshake Timeout

    where X.Y.Z.3 is my local IP. I've tried to change the TCP settings under the Firewall option and activated these options with no success: "Enforce strict TCP compliance with RFC 793 and RFC 1122" and "Enable TCP checksum enforcement". I've also tried to find the MTU, and at first I got:

        Packet needs to be fragmented but DF set

    But when I lower the value of ping -f -l to 1468 I get:

        Request timeout.

    I also deactivated CFS in the LAN and WAN zones. Nothing works. Can you please help me? Any ideas?

    Read the article

  • Kerberos & single sign-on for website

    - by Dylan Klomparens
    I have a website running on a Linux computer using Apache. I've employed mod_auth_kerb for single-sign-on Kerberos authentication against a Windows Active Directory server. In order for Kerberos to work correctly, I've created a service account in Active Directory called dummy. I've generated a keytab for the Linux web server using ktpass.exe on the Windows AD server using this command:

        ktpass /out C:\krb5.keytab /princ HTTP/[email protected] /mapuser [email protected] /crypto RC4-HMAC-NT /ptype KRB5_NT_PRINCIPAL /pass xxxxxxxxx

    I can successfully get a ticket from the Linux web server using this command:

        kinit -k -t /path/to/keytab HTTP/[email protected]

    ... and view the ticket with klist. I have also configured my web server with these Kerberos properties:

        <Directory />
          AuthType Kerberos
          AuthName "Example.com Kerberos domain"
          KrbMethodK5Passwd Off
          KrbAuthRealms EXAMPLE.COM
          KrbServiceName HTTP/[email protected]
          Krb5KeyTab /path/to/keytab
          Require valid-user
          SSLRequireSSL
          <Files wsgi.py>
            Order deny,allow
            Allow from all
          </Files>
        </Directory>

    However, when I attempt to log in to the website (from another desktop with username 'Jeff') my Kerberos credentials are not automatically accepted by the web server. It should grant me access immediately after that, but it does not. The only information I get from the mod_auth_kerb logs is:

        kerb_authenticate_user entered with user (NULL) and auth_type Kerberos

    However, more information is revealed when I change the mod_auth_kerb setting KrbMethodK5Passwd to On:

        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1939): [client xxx.xxx.xxx.xxx] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1031): [client xxx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(735): [client xxx.xxx.xxx.xxx] Trying to get TGT for user [email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(645): [client xxx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected]
        [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1110): [client xxx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic

    What am I missing? I've studied a lot of online tutorials and cannot find a reason why the Kerberos credentials are not allowing access.

    Read the article

  • My website loses packets in 70% of countries; how can I determine why it is losing packets?

    - by user2511667
    I checked my website with the Google PageSpeed tester; it shows a result of 90/100. I checked my website on Pingdom and it shows good results there. When I check my website on cloudmonitor.ca.com, it shows good results in 30% of countries, and in all other countries it shows packet loss (100%). How can we determine why my website has packet loss, and what is the solution? Is this problem from my server or from my website? I created a new blank HTML page and set it as my index page; after I tested, it still shows packet loss, so I guess this means the problem is not in my website. Here is the live result. When I visit my website in a browser, the website works fine. But when I test my domain or IP 198.178.123.219 at the command prompt it shows "Request time out". Why the timeout at the command prompt?

    Read the article

  • Difference between accessing a website using Local host and IP address

    - by Cdeez
    I have developed an ASP.NET website and deployed it to my IIS server. To see that IIS is installed fine, I type localhost in my address bar, and I get the welcome screen of IIS and its documentation in a separate window. If I give the URL of my website, http://localhost/mysites/site2/Default.aspx, I can access my site. Giving my IP address instead of localhost, like http://192.168.1.46/mysites/site2/Default.aspx, also works. Just out of curiosity I wanted to see what happens when I give my IP address in the address bar. It asks me for a username and password, saying: "The server 192.168.1.46:80 requires a user name and password." I do not know what username and password it is asking for, and as far as I know, localhost points to my own IP address internally. So what is the difference, and what username and password do I need for it?

    Update: On Chrome and IE just giving localhost displays the welcome screen, but on Mozilla, localhost also asks for a username and password.

    Read the article

  • Looking for a short term solution to improve website performance with additional server

    - by Tanim Mirza
    I am working with a small team to run an internal website using PHP 5.3.9 and MySQL 5.0.77. All the files and the database are hosted on a dedicated Linux machine with the following configuration: Intel Xeon E5450, 8 CPU cores @ 3.00GHz, 2992.498 MHz, cache 6148 KB, CentOS (Red Hat Enterprise Linux Server release 5.4). We started small, then the database got bigger, and now the website performance has degraded significantly. We often see the server run out of space, MySQL overloaded with too many calls, etc. We don't have much experience dealing with these issues. We recently got another server that we were thinking of using to improve performance. Since it has a better configuration, some of us wanted to completely move everything to the new machine, but I am trying to find out how we can utilize both machines for optimized performance. I found options such as MySQL clustering, a load balancer, etc. I would appreciate any suggestions for this situation: how to utilize two machines in the short term for best performance. By short term we are looking for something that we can deploy in a month or so. Thanks in advance for your time.

    Read the article

  • Finding ALL currently used IP addresses of Website

    - by Patrick R
    What steps would you take to discover all (or close to all) IP addresses that are currently used by a website? How would you be as exhaustive as possible without calling a website admin and asking for the list of IP addresses? ;) nslookup works, but results will vary based on the DNS server queried. whois is another good tool. dig is not bad. Let's use Facebook as an example. I'm blocking that site for the majority of our company's users, but some are approved for "research". I cannot easily use OpenDNS because we all appear to come from the same request IP address. I could change that, but I don't want to add more VLANs than I already have. I could also block something like

        regex facebook1 "facebook\.com"

    (I'm running a Cisco firewall), but that's pretty easy to sidestep. All that being said, I'm asking specifically about finding IP addresses for a domain, not about other methods by which I can block a domain name.

    Read the article

  • Website hosting from home - IIS6

    - by Paul
    I'm wanting to host a few websites from home, primarily because I'm using some beta Microsoft software (.NET 4 and EF) and don't want to install it on my production server, which is hosted at eukhost.com. Basically, I'm completely new to this sort of thing. So far, here is what I've done:

    1. Registered the domain name at namecheap.com (let's call it mydomain.com).
    2. Gone to "Nameserver Registration" in the panel and entered my IP address for the NS1 and NS2 records (let's say the IP is 0.0.0.0).
    3. Gone to "Domain Name Server Setup" and entered ns1.mydomain.com & ns2.mydomain.com.
    4. Forwarded requests from port 80 to my internal IP (let's say 192.168.1.254).
    5. Created the website in IIS (I'm just testing with a single website so far, so I have not created any host header values).

    Now, if I type in the IP address (http://0.0.0.0) I get the site as expected. However, if I enter http://www.mydomain.com I get an error saying "DNS Error - Cannot find server". I'm aware that there is a service from DynDNS that will automatically change the IP if I have a dynamic address, but my IP has remained static since I installed the ISP (since October), so I don't need this. Is there any way that I can get the DNS to work just by configuring IIS or something in Windows? I don't really want to have to pay for any third-party service. Thanks.

    Read the article

  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the search results list (and of course the sites they link to)?

    Read the article

  • How to recover Google classic design from its new design?

    - by Steven
    I typed this into my address bar:

        javascript:void(document.cookie="PREF=ID=20b6e4c2f44943bb:U=4bf292d46faad806:TM=1249677602:LM=1257919388:S=odm0Ys-53ZueXfZG;path=/; domain=.google.com");

    However, I don't like Google's new design. How do I switch back? How can I cancel or reverse this effect using JavaScript?

    Read the article

  • Data Center Design and Preferences

    - by Warner
    When either selecting a data center as a co-location facility or designing a new one from scratch, what would your ideal specification be? Fundamentally, diversified power sources, multiple ISPs, redundant generators, UPS, cooling, and physical security are all desirable. What are the additional key requirements that someone might not consider on the first pass? What are the functional details someone might not consider during the initial high-level design?

    Read the article

  • How to download video from a website that uses flash player but

    - by TPR
    Possible Duplicate: Download Flash video file from any video site?

    Livestream.com seems to be using a Flash player to show both live streams and archived/recorded streams (meaning previously shown streams). I want to download the archived streams. I am assuming that it should be much easier to download an archived video from the website compared to the live stream. Here is a sample video:

        http://www.livestream.com/copanamericana/video?clipId=pla_6f9f4d97-e48f-4b04-bcaa-18e281341b0f&utm_source=lslibrary&utm_medium=ui-thumb

    ^^ I am not interested in this particular video, just an example. Firefox plugins like DownloadHelper do not work. Any suggestions? If I look at the browser cache, no matter what the website plays, all files have the same size! If I open them, of course no video gets played. So something clever/funny is going on with the Flash player on livestream.com (yes, even for the archived videos), so it is definitely not the same as downloading videos from YouTube. However, ads played on livestream.com videos are properly stored in the browser cache.

    Read the article

  • sporadic routing to another website when opening a common url

    - by user226098
    I have a strange problem in our office: sometimes when opening a URL from one of our projects, in any browser, the right website does not show up but some other website does. In most cases it redirects to google.com with some parameters like https://www.google.de/?gfe_rd=cr&ei=krOOU8_kGcSKswadyYDQBw&gws_rd=ssl (or just the ugly Google 404 page). But today it remains on the original URL and shows the content of http://debug.netdna-cdn.com/. This happens about once a week and for no apparent reason. Even stranger, it at first occurred only on a single PC in the network; it now happens on two different computers in the network, both running Windows 8. The problem cannot be fixed by clearing the browser cache, but it can be fixed by rebooting the PC or using ipconfig /flushdns. So I think it has something to do with the DNS cache of the machine, but I have no idea what the reason for this is or how I can figure out how to solve it. Any ideas?

    Read the article
