Search Results

Search found 13725 results on 549 pages for 'browser fingerprinting'.


  • Deal Registration is moving to OPS – Guest Post

    - by Kristin Rose
    We have listened to, reflected on, and agreed with your feedback on our current deal registration system, and now we are proud to announce that deal registration is moving to the Oracle Partner Store! If you missed the live announcement at Oracle OpenWorld, watch below as Titina Ott, VP of Worldwide Alliances & Channels, presents this upcoming and exciting functionality. Some benefits of this move include:
      - Simplified registration form
      - Easier and faster product selection
      - Expanded browser support
      - Shared registration visibility between VAD and VAR
      - Downloadable tracking and reporting
      - Shared customer selection from partner ordering functions
    As you may already be aware, the Oracle Partner Store is a very popular, feature-rich application for partners like you that handles software and hardware ordering, including configurations, additional discount requests, and product and price information. This big move is set to go live on November 19th, 2012, but don't wait until then! If you don't already have an Oracle Partner Store account, register today and get ready for the big move! Best Regards, Simon Davis, Senior Director, WW A&C Quote To Order

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be a speaker at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very solid, and the sessions I attended were actually pretty hardcore. Now, two years later, I will be back, but this time as a speaker. Here are my three almost-finished sessions for Sinergija12:

    ASP.NET MVC 4 Overview. This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features: agent-based output, new application templates, Web API, and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or to start building modern web sites.

    Building SharePoint Online applications using Napa. The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.

    Cloud-enabling ASP.NET MVC applications. The cloud era is here, and over the next few years more and more web applications will be hosted in cloud environments; some of our current web applications will also be moved to the cloud. This session shows the audience how to change the architecture of an ASP.NET web application so that it runs on shared hosting and on Windows Azure from the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.

    All developers coming to Sinergija12 are welcome at my sessions. See you there! :)

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations I have yet to find a solution that meets my needs, which include:
      - Data is wholly contained within XML nodes, in flat text files.
      - There is a "native", or at least unobtrusive, method with which to perform Create/Read/Update/Delete (CRUD) operations on the schema. I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies.
      - Server-side file-system reads and writes.
      - A client-side interface element, accessible in any browser without a plug-in.
    Some extra, preferred (but optional) requirements:
      - Responds to simple SQL, or similar-syntax, queries.
      - Serves the data from a bare-bones HTTPS server with no "extra stuff", via XMLHttpRequest, plain HTTP, or JSON.
    A few thoughts: what I'm looking for may be possible via some Java server implementations, but for the sake of this question please do not suggest that, unless it meets ALL the requirements; Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for primetime, nor even yet a recommendation; however, the ability to recursively traverse the filesystem is needed for such a system to be useful. At this point I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
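
    One possible direction, sketched here only to show that XPath-style querying of flat XML files is workable from shell scripts: xmllint (from libxml2) for reads and xmlstarlet for in-place updates. Both tools, the data directory, and the record layout are assumptions, not something specified above.

        # read: run an XPath query across every XML file under the data root
        find /srv/xmlstore -name '*.xml' \
            -exec xmllint --xpath '//record[@status="active"]/name' {} \; 2>/dev/null

        # update: rewrite a single node in place
        xmlstarlet ed --inplace -u '//record[@id="42"]/name' -v 'New Name' /srv/xmlstore/records.xml

    Wrapped in a small CGI script, something like this would cover the server-side CRUD requirement while keeping the data in flat, human-readable XML.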

    Read the article

  • Twitter "Authentication Error" Turpial & Choqok (latest versions)

    - by PineMarten
    I use Turpial a lot, but Turpial isn't connecting at all. I can still connect to Twitter through the OS app (no issues signing in through Online Accounts), and of course I can still sign in using the browser, but Turpial gives me an "Authentication Error" and Choqok fails to do anything. I've tried changing my password and revoking the Turpial and Ubuntu apps in Twitter and re-enabling them, but then it gives me an "Invalid Credentials" message. I've even removed and reinstalled Turpial multiple times; still nothing. I can't find any information or resources for this type of error from Turpial online. I think it may be something recent, after finding this message elsewhere (in an article related to "Birdie"): "It looks promising, I'm currently using it atm, since all the other Twitter clients no longer work due to the API 1.0 shutdown" (posted today). I've never used Choqok before today, so I don't even know if I've set it up properly. It's failing to retrieve or send tweets (it just blank-screens), but at least it signs in. I've figured that this isn't an issue with Ubuntu, or with Turpial or Choqok, or with the router (already replaced it today), so I don't really know what I'm dealing with here. I hope it's not another API issue; Facebook did something similar just a few weeks ago.

    Read the article

  • Need guidelines for studying Game Development

    - by ShutterBug
    Hello everyone, I've completed my graduation in Computer Science and am currently working as a software engineer at a software company. I was wondering if I can build my career in game development, and if so, what my approach should be. I have a few questions:
      - Which universities should I apply to for a masters? Preferably in Canada. Are scholarships available?
      - How shall I prepare myself before applying, to give me an edge or advantage over others?
      - I know Java, C#, PHP, etc., but I don't think these languages will be needed in game development. In that case, which languages should I focus on from now on?
      - How do I get some ideas about the IDEs/engines/platforms of game development? I'm not talking about Flash/browser games.
    Please suggest anything you want, as I don't know much about this and am likely to miss the most important questions. Feel free to make this thread a starter guide for those interested in pursuing a career in game development. Post every relevant piece of information. Thanks in advance. EDIT: I can see a lot of people suggested building a small project/game. If so, please suggest how I start developing a small game (maybe a clone of an existing small game, i.e. Pac-Man, a brick game, etc.) from start to end.

    Read the article

  • DVD won't mount on Ubuntu 12.04

    - by CyborgGold
    I can't seem to be able to mount my optical drive. I have tried numerous solutions from this site with no results, and I am not able to see the device inside the file browser either. There is a DVD in the drive. I am running 12.04 on an HP g60-235dx portable; there is a link to the specs below. I will also list what I have tried (that I can find right now). I know the drive is functioning, because just before Windows 7 crashed and my MBR went fubar, I was watching movies just fine. I am fairly new to Linux, so don't assume I know anything. Here is what I have tried:

        sudo wget --output-document=/etc/apt/sources.list.d/medibuntu.list http://www.medibuntu.org/sources.list.d/$(lsb_release -cs).list
        sudo apt-get --quiet update
        sudo apt-get --yes --quiet --allow-unauthenticated install medibuntu-keyring
        sudo apt-get --quiet update
        sudo apt-get install libdvdcss2
        dmesg | grep sr0               # no output
        apt-get install libdvdnav4     # already installed, and up to date
        sudo /usr/share/doc/libdvdread4/install-css.sh

        ls -l /dev/cdrom /dev/cdrw /dev/dvd /dev/dvdrw /dev/scd0 /dev/sr0
        ls: cannot access /dev/scd0: No such file or directory
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrom -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/cdrw -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvd -> sr0
        lrwxrwxrwx 1 root root 3 Sep 10 03:51 /dev/dvdrw -> sr0
        brw-rw----+ 1 root cdrom 11, 0 Sep 10 03:51 /dev/sr0

        wodim --devices
        wodim: Overview of accessible drives (1 found) :
        -------------------------------------------------------------------------
        0  dev='/dev/sg1'  rwrw-- : 'TSSTcorp' 'CDDVDW TS-L633M'
        -------------------------------------------------------------------------

        sudo lshw      # relevant optical section:
        *-cdrom
            description: DVD-RAM writer
            product: CDDVDW TS-L633M
            vendor: TSSTcorp
            physical id: 1
            bus info: scsi@1:0.0.0
            logical name: /dev/cdrom
            logical name: /dev/cdrw
            logical name: /dev/dvd
            logical name: /dev/dvdrw
            logical name: /dev/sr0
            version: 0200
            capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram
            configuration: ansiversion=5 status=nodisc

        sudo lshw | grep cdrom
        *-cdrom
            logical name: /dev/cdrom

    Spec sheet for the portable: http://www.cnet.com/laptops/hp-g60-235dx/4507-3121_7-33496192.html If you need any more information, please let me know.
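
    One diagnostic missing from the list above is a manual mount, which separates drive or disc problems from desktop automount problems. The mount point /mnt/dvd is just an example:

        sudo mkdir -p /mnt/dvd
        sudo mount -t iso9660 /dev/sr0 /mnt/dvd   # data DVDs
        sudo mount -t udf /dev/sr0 /mnt/dvd       # video DVDs are usually UDF
        dmesg | tail                              # any kernel errors from the attempt

    Note that the lshw output above reports status=nodisc, meaning the kernel believes the tray is empty; if the manual mount also fails with a "no medium found" error, the drive is not detecting discs at all, which points at hardware rather than software.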

    Read the article

  • strange 404 errors

    - by user1400532
    I have this website, thinkmovie.in. I recently enabled CloudFlare along with MaxCDN. When I look at my server logs, I see strange 404 errors for many of the files. For example:

        http://thinkmovie.in/img/content/15062012faith/thumbs/model_fai12e8th_latest_photoshoot_10.jpg

    but the actual URL is

        http://thinkmovie.in/img/content/15062012faith/thumbs/model_faith_latest_photoshoot_10.jpg
        (referring URL: http://www.thinkmovie.in/gallery/)

    That is, the term "model_faith" has been replaced by "model_fai12e8th". And one more:

        http://thinkmovie.in/image.php/?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg

    where the actual URL is

        http://thinkmovie.finalytics.in/image.php/?offset=1&height=120&width=144&cropratio=1.2:1%E2%84%91=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg?offset=1&height=120&width=144&cropratio=1.2:1&image=/img/content/07052012pranitha/pranitha_hot_in_saguni_movie_press_meet_0.jpg
        (referring URL: http://www.thinkmovie.in/gallery/hotactress/album/pranitha_hot_stills_19012012pranitha/)

    Here "&image" has been replaced by "%E2%84%91". I'm not able to understand how this is happening. I have checked my code several times, and I am not able to replicate this problem from my browser. Please help me.
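
    One way to tell whether the rewriting happens at the origin or in the CDN layer is to fetch the same page twice, once through CloudFlare's proxy and once directly from the origin server. ORIGIN_IP is a placeholder for the server's real address, and this whole approach is a guess at a diagnosis, since the logs alone don't show where the mangling occurs:

        # through CloudFlare (normal DNS)
        curl -s http://thinkmovie.in/gallery/ | grep -o 'model_fai[^"]*' | sort -u
        # bypassing CloudFlare, straight to the origin
        curl -s --resolve thinkmovie.in:80:ORIGIN_IP http://thinkmovie.in/gallery/ | grep -o 'model_fai[^"]*' | sort -u

    If the copy fetched directly from the origin contains the correct model_faith links while the proxied copy contains model_fai12e8th, then a CDN feature (CloudFlare's email-obfuscation and rewriting features are known to touch page content) is the place to look.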

    Read the article

  • RequireJS: JavaScript for the Enterprise

    - by Geertjan
    I made a small introduction to RequireJS via some of the many cool new RequireJS features in NetBeans IDE. I believe RequireJS, and the modularity, encapsulation, and loading solutions that it brings, provides the tools needed for creating large JavaScript applications, i.e., enterprise JavaScript applications. (Sorry for the wobbly sound in the above.) An interesting comment on the above by my colleague John Brock: "One other advantage that RequireJS brings is called lazy loading of resources. In your first example, every one of those .js files is loaded when the first file is loaded in the browser. By using the require() call in your modules, your application will only load the JavaScript modules when they are actually needed. It makes for faster startup in large applications. You could show this by showing the libraries that are loaded in the Network Monitor window." So I did as suggested: click the screenshot to enlarge it and notice how the Network Monitor is helpful in the context of RequireJS troubleshooting.

    Read the article

  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation. I don't mean language or IDE. Different languages have different strengths, and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smart phones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically but by platform too. Sure, there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write once, run anywhere" seemed to finally be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in the OSes, languages, and APIs needed to write them, as each battles for supremacy. It's kind of weird how the cycle of popularity goes: mainframes with terminals (thin client), PC (thick client), web browser (thin client), phone app (thick-ish client). I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue, along with the battle of the tech du jour.

    Read the article

  • How can I fix the #c3284d# malvertising hack on my website?

    - by crm
    For the past couple of weeks, at semi-regular intervals, this website has had the #c3284d# malware code inserted into some of its .php files; the .htaccess file also had its equivalent code inserted. I have, on many occasions, removed the malicious code, replaced files, changed the FTP password in my FTP client (which is CoreFTP), and changed the connection method to FTPS for more secure storage of the password (instead of plain text). I have also scanned my computer several times using AVG and Windows Defender, which found no malware on my computer that might have been capturing my FTP passwords. I used Sucuri SiteCheck to check my website, and it says my website is clean of malware. That is bizarre, because I just attempted to click one of the links on the site a minute ago and it sent me to another one of these random stats.php sites, even though it appears I have gotten rid of the #c3284d# code again (which will no doubt be re-inserted somehow in an hour or so). Has anyone found an actual viable solution for this malware hack? I have done just about all of the things suggested here and here, and the problem still persists. Currently, when I click on a link within the site's navigation menu in Google Chrome, I get Google's malware warning page:

        Warning: Something's Not Right Here!
        oxsanasiberians.com contains malware. Your computer might catch a virus if you visit this site.
        Google has found that malicious software may be installed onto your computer if you proceed. If you've visited this site in the past or you trust this site, it's possible that it has just recently been compromised by a hacker. You should not proceed. Why not try again tomorrow or go somewhere else?
        We have already notified oxsanasiberians.com that we found malware on the site. For more about the problems found on oxsanasiberians.com, visit the Google Safe Browsing diagnostic page.

    I'm wondering if it is possible that the Google Chrome browser I am using has itself been hacked. Does anyone else get redirected when clicking links on the website?
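
    Two quick server-side checks that often help with this kind of recurring injection; the web-root path is a placeholder:

        # find every file still carrying the injected marker
        grep -rln 'c3284d' /home/user/public_html
        # list PHP files changed in the last 7 days (catches fresh re-infections and backdoors)
        find /home/user/public_html -name '*.php' -mtime -7 -ls

    Since the code keeps returning even after cleaning and password changes, the usual suspects are a leftover backdoor file (the find above can surface it) or a vulnerable CMS component or plugin, rather than the FTP credentials themselves.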

    Read the article

  • Free Xsigo Technical Pre-sales Workshop for Selected Partners!

    - by mseika
    In 2012 Oracle acquired Xsigo, a developer of network I/O virtualisation solutions. This acquisition complements Oracle's extensive virtualisation portfolio. With Oracle Virtual Networking products (Xsigo) you can:
      - Virtualise connectivity from any server to any storage and any network.
      - Reduce datacentre complexity by 70%.
      - Cut infrastructure expenses by up to 50%.
    Benefits to channel partners:
      - Offer a unique proposition that your competitors can't match.
      - Provide an innovative solution that delivers more performance at less cost.
      - High margins that help sell more products and services.
    This course is aimed at technical pre-sales consultants, equipping them to provide detailed demos and to architect RFP feedback and customer solutions. The language of this event is French.
    WHEN: 24th September 2013
    WHERE: Oracle France, 15, boulevard Charles De Gaulle, 92715 COLOMBES
    FEES: Free of charge
    AGENDA:
      09.00: Welcome, Coffee & Introduction
      09.30: Value Propositions, Architecture & Use Cases
      11.30: Build an OVN Web Quote & TCO
      12.30: Lunch
      13.30: Competitive Summary
      14.00: Design Scenario Workshop
      15.45: Questions/Opportunities
    REGISTRATION: Register via this link as soon as possible, 14th June at the latest. Note that we have only 20 seats in total for this event, and that after 14th June we will release free seats for other organizations to register. We look forward to your participation!
    What we expect from you: you will bring your own laptop (the recommended browser is Firefox 10 ESR); you have checked the material and conducted the assessments; and you will be flexible in terms of agenda and progress, as we intend this to be more of a workshop with dialogue rather than sticking tightly to the tentative timeline.
    What this is not: this PartnerLab does not replace Oracle University trainings; it does not lead to a certification as such; and it does not by itself equip partners with full and complete implementation skills.

    Read the article

  • Wifi interface changes name seemingly at random

    - by ray_voelker
    I'm currently having some issues getting a wireless interface to work continuously under an install of Ubuntu 12.04.1 LTS. Some of the issues I'm experiencing:
      - The connection will drop out some time after it has initially worked.
      - The interface will have a different name after a reboot; for example, wlan0 will become wlan4 in the output of ifconfig -a.
      - Ubuntu will take a long time to boot while looking for network adapters.
    The purpose of this build is to function as a web kiosk in a library: the computer is supposed to boot into a web browser and allow browsing of the catalog. For some reason this interface does not work as it should. Are there explanations for some of these problems, and perhaps solutions? The wireless card appears as this after doing an lspci:

        Ralink corp. RT2561/RT61 802.11g PCI

    In the /etc/network/interfaces file I have the following configuration for the interface:

        auto wlan0
        iface wlan0 inet dhcp
        wireless-essid UDwireless
        wireless-mode Managed

    Thanks in advance for help on this.
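
    On 12.04, the usual fix for the name changing across reboots (wlan0 becoming wlan4 typically means the persistent-rules generator has accumulated a new entry on each boot) is to pin the name to the card's MAC address in /etc/udev/rules.d/70-persistent-net.rules. The MAC below is a placeholder for the one ifconfig -a reports:

        # /etc/udev/rules.d/70-persistent-net.rules
        SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", KERNEL=="wlan*", NAME="wlan0"

    After editing, delete any duplicate lines the generator has written for the same card and reboot; the interface should then stay wlan0, matching the /etc/network/interfaces stanza above.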

    Read the article

  • Web game engine: how does it work?

    - by TWCrap
    Hi all. First of all, don't yell that I shouldn't start with this; I just want to know how it works. The question is: how does the engine of a web game work? A game like Tribal Wars, Grepolis, or Forge of Empires. How does the keeping-alive work? I mean, a user starts constructing a building and quits the browser; the building is completed even after the user's session has expired, and the user's points are updated when the building is finished. So how does that work? What do you think: do they have some kind of cron job that is fired every second, walks through the database, searches for finished buildings, and updates the stuff? Or do you think they do it differently? I hope I was clear. NOTE: I don't need any code; I'm just interested in the process behind the game. Greetings, Marc
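
    The cron approach guessed at above can be sketched as a periodic sweep; the crontab line, database name, table, and credentials below are all invented for illustration:

        # crontab entry: once a minute, complete any construction whose finish time has passed
        * * * * * mysql -u game -p"$GAME_DB_PASS" gamedb -e "UPDATE buildings SET done = 1 WHERE done = 0 AND finish_at <= NOW();"

    In practice, many such games avoid the sweep entirely: they store only the finish timestamp and compute a building's state lazily, on the next request that needs it, so nothing has to run every second.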

    Read the article

  • Can I make a business of teaching home users Ubuntu? [on hold]

    - by Dorgaldir
    I was thinking about a way to bring Ubuntu to a bigger public, since it has great advantages for people in the lower income brackets who only use a PC for basic tasks. They pay for a Windows licence without actually needing Windows, because 95% of what they do is in a browser and the other 5% is typing a Word document or making a simple Excel sheet. For these people something like Ubuntu is ideal: they can prolong the life of their old PC or laptop and thus save money. And as we all know, saving money is interesting not only for those on the lowest incomes but for most of us. But when I talk to people, they don't want to use Ubuntu because they know Windows and they don't know this; they'll complain about having to adapt to Windows 8, but adapting to Ubuntu seems a bridge too far. What if someone in the neighborhood gave simple Ubuntu courses, teaching people things like:
      - What is an OS?
      - What is Ubuntu?
      - How do I obtain Ubuntu?
      - How do I install Ubuntu?
      - How do I set up my email in Ubuntu?
      - How do I make a text document in Ubuntu?
      - How do I update my Facebook wall in Ubuntu?
    ...simple, basic PC usage, but within Ubuntu. But as much as I would like to work for free all day, I can't do this for free for people outside of my social circle. So I was wondering if it is possible to make a business of, and earn money by, giving Ubuntu courses, or whether there are steps to be taken before this is possible. However...
      - Do I need an Ubuntu or Canonical licence?
      - Do I need to get a certificate?
      - Do I have to make some kind of deal or contract with Canonical?
    Just to be clear, this is all just an idea in my head at this point; I'm gathering information. I'm not a teacher at a school, just a programmer who is thinking about options in life. Thanks in advance!

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. The site has all human-readable, intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store, so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves will occasionally be reversed. So I'm trying to figure out whether 301, 302, or 307 redirects should be used when serving pages in response to requests for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is that this system could inadvertently make some pages unavailable to some users. Questions: if the clients move a page at location/URL A to a new location B, users receive the redirect from A to B, and the clients then move the page back to A again, how long can I expect any of those users to keep having their requests for A redirected to B, in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work, and how long can I expect a 301 redirect to linger out there?
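
    For what it's worth, a 301 is cached by the requesting client (browsers honor it aggressively, and some caching proxies do too), not by routers; intermediate network equipment never sees HTTP semantics at all. A quick way to watch what the server actually sends, with example.com standing in for the real site:

        # show just the status line and Location header for a moved URL
        curl -sI http://example.com/old-location | grep -i -e '^HTTP' -e '^Location'

    If reversals are a real possibility, a 302 (or a 307, which also preserves the request method) avoids the long-lived client-side caching that makes a reversed 301 painful.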

    Read the article

  • Is sending data to a server via a script tag an outdated paradigm?

    - by KingOfHypocrites
    I inherited some old JavaScript code for a website tracker that submits data to the server using a script URL:

        var src = "http://domain.zzz/log/method?value1=x&value2=x";  // data rides in the query string
        var e = document.createElement('script');
        e.src = src;
        // presumably the original also appended the element, or the request would never fire:
        document.getElementsByTagName('head')[0].appendChild(e);

    I guess the idea was that cross-domain requests didn't have to be enabled. Also, it was written back in 2005, and I'm not sure how well XmlHttpRequests were supported at the time. Anyone could stick this on their website and send data to our server for logging, and ideally it would work in almost any browser with JavaScript. The main limitation is that all the server can do is send back JavaScript code, and each request has to wait for a response from the server (in the form of a generic acknowledgement JavaScript method call) to confirm it was received before the next is sent. I can't find anyone doing this online, nor any metrics on whether it is faster or more secure than XmlHttpRequests. I don't know if this is just an old way of doing things or still the best way to send data to the server when the traffic is mostly one-way and you need the best performance possible. So, in summary: is sending data via a script tag an outdated paradigm? Should I abandon it in favor of XmlHttpRequests?

    Read the article

  • Package Manager cannot access repositories but internet is working

    - by kazman
    I am currently at a conference in another country and my package manager cannot access the repositories. My internet connection is working fine: I can ping the repositories or open them in a browser, but the package manager fails to reach them. If I run sudo apt-get update, it throws

        Something wicked happened resolving 'wwwproxy:3128' (-5 - No address associated with hostname)

    (or Ign lines). This proxy corresponds to the proxy at my office back home, but I have disabled the proxy in the package manager. Scanning for the best repository doesn't work either; it fails to connect to any of them. I have searched for this online and checked my apt.conf file, which contains:

        Acquire::http::proxy "http://wwwproxy:3128/";
        Acquire::https::proxy "https://wwwproxy:3128/";
        Acquire::ftp::proxy "ftp://wwwproxy:3128/";
        Acquire::socks::proxy "socks://wwwproxy:3128/";

    If I remove apt.conf (or replace it with a blank file), it makes no difference. I don't see why it should, since I am connecting directly (and have set that in the package manager's network settings). I have also tried some things with resolv.conf (changing the name servers to the primary and secondary DNS), following other advice, to no avail. I am running 12.04. (I wrote this very quickly and listed everything I have tried to possibly shorten the troubleshooting process; I have very limited time between lectures and need this sorted ASAP. My apologies.)
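
    Since deleting apt.conf changes nothing, the proxy setting is probably coming from somewhere else, most often an environment variable or /etc/environment. These checks, and the one-off override, are generic rather than specific to this machine:

        # is a proxy set in the environment or in another apt fragment?
        env | grep -i proxy
        grep -ri proxy /etc/environment /etc/apt/apt.conf.d/ 2>/dev/null
        # one-off update with the http proxy forced off
        sudo apt-get -o Acquire::http::proxy=false update

    If the forced-off update succeeds, remove the stale wwwproxy entry from whichever file the grep turns up.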

    Read the article

  • Issue updating domain name servers from BlueHost to AWS

    - by cowls
    I am trying to migrate my site hosting from Bluehost to an AWS cloud-based service. I have the site up and running on AWS with an Elastic IP configured, and it loads fine when I specify the IP address in the browser. I have gone into Route 53 in the AWS console, created a hosted zone for the domain, and then created a new record set of type A using the IP address as the value. I have a domain name registered with Bluehost. I've logged into the Bluehost account and updated the domain name servers to point to those specified in Route 53 in the AWS console. When I hit the IP address directly, the site loads; however, it doesn't load when using the domain name (I get a Google Chrome "Oops" error page saying the page is not found). I've tried using http://dns.squish.net/ to debug, but it seems to be giving me the correct results:

        fizaclegems.com 300 IN A 107.20.209.78

    where 107.20.209.78 matches the Elastic IP configured in the AWS console. This is the result it gives for all 4 name servers. Am I missing a step here? Does anyone know what else I should be doing or looking for?
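
    Two checks worth running from a terminal; the awsdns host name below is a placeholder, since the real ones are listed against the hosted zone's NS record in Route 53:

        # which name servers does the public DNS tree currently return?
        dig NS fizaclegems.com +short
        # does the Route 53 zone itself answer correctly when asked directly?
        dig A fizaclegems.com @ns-123.awsdns-45.com +short

    If the first command still shows Bluehost name servers, the registrar change hasn't propagated yet (or was made on the wrong domain). If the second returns nothing, the A record was probably created for a different name (for example www only) than the one being tested.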

    Read the article

  • NFS mount of /var/www to OS X

    - by ploughguy
    I have spent two hours trying to create an NFS mount from my Ubuntu 10.04 LTS server to my OS X desktop system. The objective is a three-way file compare between the code base on the Mac, the development system on the local Linux test box, and the hosted website. The hosted service uses cPanel, so I can mount a web disk there: easy as pie, took 10 seconds. The local Ubuntu box, on the other hand: nothing but pain and frustration. Here is what I have tried:
      - In File Browser, navigate to /var/www/site and right-click. Select "Share this folder", enter the share name wwwsite and a comment, and click "Create Share". A message says you can only share file systems you own. There is a message on how to fix this, but the killer is that this shares via SMB, which will change the LFs to CR-LFs and break the file comparison. So forget this option.
      - In a terminal window, run shares-admin (I have not been able to convince it to give me the "Shared Folders" option in the System Administration menu; maybe it is somewhere else, but I cannot find it) and define an NFS export: enter the path /var/www/site, select NFS, enter the IP address of the iMac, and save.
      - On the Mac, try to mount the file system using the usual methods (Finder, the command-line mount command): not found. Nothing.
      - Restart the Linux box in case there is a daemon that needs restarting: nothing.
    So I have run out of things to try. I have searched the documentation, which is pretty basic, and the man pages are as opaque as ever. Please, oh please, will someone help me get this @38&@^# thing to work! Thanks for reading this far... PG.
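
    For reference, here is what the manual route typically looks like for this pair of systems; the iMac address 192.168.1.10, the server address 192.168.1.20, and the mount point are placeholders:

        # on the Ubuntu server
        sudo apt-get install nfs-kernel-server
        echo '/var/www/site 192.168.1.10(ro,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra

        # on the Mac
        sudo mkdir -p /Volumes/wwwsite
        sudo mount -t nfs -o resvport 192.168.1.20:/var/www/site /Volumes/wwwsite

    The -o resvport option matters here: OS X's NFS client uses a non-reserved source port by default, and a Linux NFS server will refuse the mount without it unless the export carries the insecure option.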

    Read the article

  • Game 30% done in HTML5. Maybe it was a bad idea. Should I change to Unity3d? [on hold]

    - by Dokkat
    I'm creating a 3D game in HTML5. It's 30% complete, and the hard part is already coded. The server runs on Node.js. Now I'm realizing that maybe it was not a wise choice, because:
      - Three.js still has many bugs, and I don't see the same thing on every machine; each browser and OS can give different results.
      - I'm afraid my clients will have a lot of trouble getting my game to run properly.
      - I have tons of sprites and models in my game, and I wonder if my clients will have to load them all again every time they want to play.
      - I wonder if a Node.js server will be fast enough to handle it, and I'm afraid it won't be scalable.
    What would you advise? Should I continue and finish the game in HTML5, or is it better to remake it in something else, like Unity3d for the client and (what?) for the server?

    Read the article

  • How do web servers enforce the same-origin policy?

    - by BBnyc
    I'm diving deeper into developing RESTful APIs and have so far worked with a few different frameworks to achieve this. Of course I've run into the same-origin policy, and now I'm wondering how web servers (rather than web browsers) enforce it. From what I understand, some enforcement seems to happen on the browser's end (e.g., honoring an Access-Control-Allow-Origin header received from a server). But what about the server? For example, let's say a web server hosts a JavaScript web app that accesses an API also hosted on that server. I assume that server would enforce the same-origin policy, so that only the JavaScript hosted on that server would be allowed to access the API. This would prevent someone else from writing a JavaScript client for that API and hosting it on another site, right? So how would a web server be able to stop a malicious client that makes AJAX requests to its API endpoints while claiming to be running JavaScript that originated from that same web server? What is the way the most popular servers (Apache, nginx) protect against this kind of attack? Or is my understanding of this somehow off the mark? Or is the same-origin policy only enforced on the client end?
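
    The short answer is that the same-origin policy is enforced entirely by browsers; the server only advertises, via CORS response headers, which origins a compliant browser should allow, and a non-browser client is free to ignore all of it. That is easy to demonstrate from the command line (api.example.com is a stand-in for a real endpoint):

        # claim an arbitrary origin; the server has no way to verify it
        curl -sI -H 'Origin: https://attacker.example' https://api.example.com/endpoint | grep -i '^access-control'

    The request goes through regardless of what the headers say, which is why APIs are protected with credentials (API tokens, or cookies plus CSRF defenses), not by the same-origin policy.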

    Read the article

  • Release Notes for 3/2/2012

    Here are the notes for today's release:
      - Added a progress indicator when saving issues.
      - Added support for viewing CodePlex RSS feeds in Chrome.
      - Deployed several bug fixes:
        - Fixed an issue where the back button on Internet Explorer was not working as intended when browsing code.
        - Fixed an issue where long commit comments would push the source control info box outside of the boundaries of the page.
        - Fixed an issue where Internet Explorer users were not able to widen the frame of the source code browser until a file was selected.
        - Fixed an issue where opening a source code file directly from a URL in Internet Explorer would cause the source code tree to be collapsed.
        - Fixed an issue where adding a code snippet with long lines of text to a discussion thread using Internet Explorer would needlessly display a vertical scrollbar, limiting the amount of code visible.
        - Fixed an issue where tabbing through some links would render them invisible.
    We deprecated support for embedding PreEmptive analytics statistics on the project statistics page. If you're interested in collecting and reporting your own statistics, PreEmptive's RunTime Intelligence Endpoint Starter Kit offers a good starting point for capturing data. Have ideas on how to improve CodePlex? Visit our ideas page! Vote for your favorite ideas or submit a new one. Got Twitter? Follow us and keep apprised of the latest releases and service status at @codeplex.

    Read the article

  • A record not resolving

    - by user1561108
    I have a hosted domain at SiteGround. On this domain and host I have a subdomain with a WordPress install. I wish to move this blog to another host (HostGator) while keeping the domain with SiteGround. To do this I created a hosting account at HostGator, got its IP address, and set the A record in SiteGround's cPanel accordingly:

        subdomain.example.com 14400 A (IP of HostGator account)

    Going by this online traceroute tool, the records appear to have been updated (over 4 hours ago now), as the name now resolves to a theplanet.com server location, which HostGator uses, yet the subdomain is still not resolving from a web browser. The account at HostGator has been set up and is navigable via ip-address/~accountname. What's going wrong here? I should add that the relevant DNS records on the HostGator side look like this:

        subdomain.example.com 86400 IN SOA ns483.websitewelcome.com.
        subdomain.example.com 86400 IN NS ns1.siteground145.com.
        subdomain.example.com 86400 IN NS ns2.siteground145.com.
        subdomain.example.com 14400 IN A 74.54.176.3

    I'm not sure if the HostGator record should be classed as the SOA record, but I don't know enough about it to be sure. Is this the source of the problem?
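
    A pair of dig queries separates the two sides; the host names come straight from the records above:

        # what the authoritative SiteGround servers answer
        dig A subdomain.example.com @ns1.siteground145.com +short
        # what the HostGator-side server answers
        dig A subdomain.example.com @ns483.websitewelcome.com +short

    If SiteGround's servers (the ones actually delegated for the domain) return the new HostGator IP, the browser problem is more likely local DNS caching or a virtual-host misconfiguration at HostGator than the DNS records themselves.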

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old URL?

    - by user2178914
    We redirected all the old URLs to new ones properly using .htaccess. The problem is that Google somehow is still finding content in the old page (which it shouldn't) and stores it in the cache under the old URL rather than the new one. For example:

        Old page: http://www.natures-energies.com/iching.htm
        New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

      - If you type the old URL into the browser, it redirects.
      - If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected.
      - If I try to crawl as any other bot, it still says 301 redirected.
      - Even if you click the old link in Google, it redirects to the new URL.
    Only in its cache does Google show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one. One more interesting thing: if I try a cache lookup for the new page, it shows a cache of the new content with the old URL! Any help would be appreciated; I am at the end of my wits and think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll see some patterns that I missed:

        site:www.natures-energies.com inurl:htm -inurl:https|index
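
    One sanity check from outside Webmaster Tools, using the URLs quoted above, is to trace the full redirect chain and confirm there is exactly one hop ending in a 200:

        curl -sIL http://www.natures-energies.com/iching.htm | grep -i -e '^HTTP' -e '^Location'

    If that shows a single 301 followed by a 200 on the new URL, the server side is fine, and what remains is cache lag on Google's side: cached snapshots can keep showing the old URL label for weeks after a correct 301. A rel=canonical tag on the new page can help the consolidation along.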

    Read the article

  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me are either learning or are paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions but didn't want to bother my co-workers with all of them; it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between IDE and browser, trying to make sense of what had already been written and to differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time trying to understand, via the internet, code that had at its heart an in-house concept, but overall I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers think about that mindset. Should new developers ask more questions to get up to speed faster, or should they do the research themselves? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

    Read the article
