Search Results

Search found 19913 results on 797 pages for 'bit packing'.


  • Oracle Linux / Symantec Partnership

    - by Ted Davis
    Fred Astaire and Ginger Rogers sang the now famous lyrics: “You like to-may-toes and I like to-mah-toes”. In the tech world, is it Semantic or is it Symantec? Ah, well, we know it’s the latter. Actually, who doesn’t know or hasn’t heard of Symantec in the tech world? Symantec is thoroughly engrained in Enterprise customer infrastructure, from their Storage Foundation Suite to their Anti-Virus products. It would be hard to find anyone who doesn’t use their software. Likewise, Oracle Linux is thoroughly engrained in Enterprise infrastructure – so our paths cross quite a bit. This is why the Oracle Linux engineering team works with Symantec to make sure their applications and agents are supported on Oracle Linux. We also want to make sure the Oracle Linux / Symantec customer experience is trouble free so customer work continues at the same blistering pace. Here are a few Symantec applications that are supported on Oracle Linux: Storage Foundation, NetBackup Enterprise Server, Symantec Antivirus for Linux, Veritas Cluster Server, and Backup Exec Agent for Linux. So, while Fred and Ginger may disagree on how to spell tomato, for our software customers the Oracle / Symantec partnership works together so that our joint customers experience and hear the sweet song of success.

    Read the article

  • What sort of attack URL is this?

    - by Asker
    I set up a website with my own custom PHP code. It appears that people from places like Ukraine are trying to hack it. They're trying a bunch of odd accesses, seemingly to detect what PHP files I've got. They've discovered that I have PHP files called mail.php and sendmail.php, for instance. They've tried a bunch of GET options like:

        http://mydomain.com/index.php?do=/user/register/
        http://mydomain.com/index.php?app=core&module=global&section=login
        http://mydomain.com/index.php?act=Login&CODE=00

    I suppose these all pertain to something like LiveJournal? Here's what's odd, and the subject of my question. They're trying this URL:

        http://mydomain.com?3e3ea140

    What kind of website is vulnerable to a 32-bit hex number?
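
    For illustration only, a hypothetical log check (not part of the original question) to see how often such probes show up, assuming a default Apache combined access log path and that the hex token appears as the whole query string:

        # Count requests whose query string is a bare 8-digit hex token (hypothetical log path).
        grep -cE '"GET /\?[0-9a-f]{8} ' /var/log/apache2/access.log

        # List the client IPs sending them, most frequent first.
        grep -E '"GET /\?[0-9a-f]{8} ' /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -rn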

    Read the article

  • Ping isn't acting accurate?

    - by Earlz
    I've been trying to diagnose some latency issues with my internet connection. I've been lagging out of online video games and such, which of course could be their server's fault. So, I've been running ping some. It doesn't indicate anything unusual, but it does act a bit strange. I can start it with something like ping internethost -i 0.1 so that it will send a ton of packets, and every 10-20 seconds it will appear to just freeze for 2 or 3 seconds. The packets are still being received in the right order though, and there is no packet loss. The weirdest thing is that after the little freeze up, it will usually just report a ping time that is about 10-30ms higher than the average. How does this happen? Is ping still being accurate? I'm using Arch Linux. The host I'm pinging is my website, which shouldn't be doing any kind of ping slowing or filtering.
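
    As a point of reference, a minimal sketch of the same kind of probe with per-reply timestamps added (assuming iputils ping, as shipped on Arch; "internethost" is the placeholder host from the question), so that a freeze shows up as a gap between consecutive timestamps:

        # Send a packet every 0.1 s and print a Unix timestamp (-D) before each reply.
        # Intervals below 0.2 s may require root on some iputils builds.
        ping -i 0.1 -D internethost | tee ping.log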

    Read the article

  • Program for drawing with pen tablet, like Salman Khan's one

    - by Halst
    I do a lot of sketching with my pen tablet. I use Microsoft Paint in Windows 7, and it is just perfect except for bad anti-aliasing. I found some videos of Salman Khan where his sketching is really smooth and anti-aliased. Do you know what program he might use? You can see a bit of its interface here: http://www.khanacademy.org/press/chronicle.HTML and some more: http://www.khanacademy.org/ http://khanexercises.appspot.com/video?v=GW8ZPjGlk24 Otherwise, you can recommend something else. I hope to find something like Microsoft Paint in Windows 7, but anti-aliased, or whatever.

    Read the article

  • Adding a CMS to an existing Magento shop

    - by user6341
    I am working on a project for 3 niche stores built on Magento (using Magento's multi-store function) that each get roughly 50k unique visitors a day. The sites don't currently have a blog or forum or any social networking aspects. I would like to add a CMS to each site that can be centrally run, and I would like it to take over the front-end content from Magento. I would also like it to maintain an online blog/publication of sorts with videos, articles, and the like, with privileges to edit the content given to a dozen or so people with different privileges. I want to add a forum to each site that is fairly robust, and to possibly add some social networking aspects down the road, so extensibility and available plugins/mods in each CMS is important. Other than shared login between the forums, blog/publication and store, I would like to be able to integrate some content from the forums and blog/publication into the store as well. After researching this a bit, I am inclined towards Drupal, but I haven't found any modules to integrate it with Magento. Also, since the blog content will be done by about a dozen nontechnical people, I want something that is very easy to work with. Lastly, since the site gets a good amount of traffic, speed and security are very important. What CMS would you recommend integrating in this context? I am deciding between Drupal, Wordpress and Plone. Thanks.

    Read the article

  • Mirror using apt-mirror and exclude certain sections/categories

    - by Onitlikesonic
    I'm currently using apt-mirror to create a local mirror of the debian repositories. As the mirrored repositories will be used only by machines destined to be headless servers and as an effort to reduce the current mirroring size (around 75GB), categories like games and possibly others will never be needed. How can I go about specifying (on the mirror.list perhaps?) what sections/categories I want to be excluded from the mirroring? Maybe a bit subjective, but apart from games what other sections/categories could be "safely" ignored from the mirroring for my environment purposes? My mirror.list looks as below since all the machines are using precise.

        # MAIN
        deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        deb-i386 http://archive.ubuntu.com/ubuntu precise main restricted universe multiverse
        # SECURITY
        deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse
        deb-i386 http://archive.ubuntu.com/ubuntu precise-security main restricted universe multiverse

    Also, what others would you recommend adding to the list to be mirrored for a relatively stable environment? Again I understand this is subjective, just looking for some pointers. Much appreciated in advance.
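
    As far as I can tell, apt-mirror filters each line at the component level (main, restricted, universe, multiverse) rather than by section names such as games, so one way to shrink the mirror is to drop the components you do not need. A minimal sketch of what that mirror.list might look like (illustrative only, not a tested recommendation):

        # Sketch: mirror only main and restricted for a headless-server environment.
        # MAIN
        deb-amd64 http://archive.ubuntu.com/ubuntu precise main restricted
        deb-i386 http://archive.ubuntu.com/ubuntu precise main restricted
        # SECURITY
        deb-amd64 http://archive.ubuntu.com/ubuntu precise-security main restricted
        deb-i386 http://archive.ubuntu.com/ubuntu precise-security main restricted
        # Let apt-mirror's clean script remove packages that are no longer referenced.
        clean http://archive.ubuntu.com/ubuntu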

    Read the article

  • IE8 Windows 7 (64bit) security certificate problem

    - by Steve
    Hi, we have just received some new computers for use in the office (Dell Vostro). They seem to work fine in the main. When we use IE8 to go to some web pages such as Yahoo Mail it tells us: “There is a problem with this website's security certificate.” If we have a look at the details it says: “This certificate cannot be verified up to a trusted certification authority.” This however works correctly in Firefox. I don't understand why I should get such an error message; should this not just work? I don't think it's anything to do with the certificate itself, as this is happening on www.yahoo.co.uk and other commercial (Amazon I think?) sites. I think there is something off with the PCs' setup. The PC has Windows 7 (64 bit) and Norton Internet Security installed. Any ideas as to why this is happening? Thanks

    Read the article

  • What about introduction to programming with C# via LINQPad?

    - by Gulshan
    From different questions/answers/articles on this and some other sites, I got the idea that the introductory language for programming should be high level and less verbose. C# is one of the heavily used high-level languages these days. It's also multi-paradigm and a descendant of C, the lingua franca of all programming languages. So, I think it has the potential to be the introductory programming language. But I felt it's a bit verbose for novice learners. Then LINQPad came to my mind. With LINQPad, someone can start with C# without its verbosity, because you can run just one statement, a few statements, or a standalone function with LINQPad. You can also run a full source file. Another thing it provides is using SQL, so it can be used for learning SQL too. And not to mention, it's free. So, what do you guys think about the idea of introducing programming with C# via LINQPad? Anything to watch out for? Any suggestions?

    Read the article

  • steam won't open after install

    - by Dan Cooper
    I've looked all over the place for a solution but no one seems to be getting the same error codes as me. When I try to run Steam through the terminal I get the following error:

        Running Steam on ubuntu 13.04 64-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1367621987_client)
        Installing breakpad exception handler for appid(steam)/version(1367621987_client)
        unlinked 0 orphaned pipes
        Gtk-Message: Failed to load module "overlay-scrollbar"
        Installing breakpad exception handler for appid(steam)/version(1367621987_client)
        [1013/104817:WARNING:proxy_service.cc(646)] PAC support disabled because there is no system implementation
        /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp (281) : Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed.
        Assert( Assertion Failed: ClientAPI_InitGlobalInstance: InternalAPI_Init_Internal failed. ):/home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/../common/steam/client_api.cpp:281
        Installing breakpad exception handler for appid(steam)/version(1367621987_client)
        Uploading dump (out-of-process) [proxy ''] /tmp/dumps/assert_20131013104817_1.dmp
        /home/buildbot/buildslave_steam/steam_rel_client_ubuntu12_linux/build/src/steamUI/SteamStartup.cpp (627) : Assertion Failed: ! "There was a problem with your Steam installation.\n" "Please reinstall steam.\n"
        unlinked 2 orphaned pipes
        CAsyncIOManager: 0 threads terminating. 0 reads, 0 writes, 0 deferrals.
        CAsyncIOManager: 75 single object sleeps, 0 multi object sleeps
        CAsyncIOManager: 0 single object alertable sleeps, 1 multi object alertable sleeps
        [2013-10-13 10:48:16] Startup - updater built May 3 2013 15:08:27
        [2013-10-13 10:48:16] Verifying installation...
        [2013-10-13 10:48:16] Verification complete
        Shutting down. . .
        [2013-10-13 10:48:17] Shutdown
        Finished uploading minidump (out-of-process): success = yes
        response: CrashID=bp-d172a742-b7dd-419c-b235-d60c32131013

    I've tried sudo apt-get purge and the terminal tries to tell me I don't have Steam installed. I've tried reinstalling with the software center but that doesn't help either.

    Read the article

  • My Windows 8 computer does not come out of Standby

    - by Jikag
    My computer will not come out of standby. It has had this issue ever since I bought it ~4 years ago. When I purchased it, it came with Vista; I have since upgraded it to 7 and now 8 Pro, each time hoping that it would fix the problem, each time finding that it did not. My computer is an HP Pavilion, model #: m9500y. The computer goes into standby, but does not come out of it properly. If Hybrid sleep is on, it does not respond at all. If Hybrid sleep is off, it comes out and I can see the desktop for a bit, but then the screen goes black again and I have to reboot. I have tried running an sfc and a chkdsk in case it's a corrupted file, but it hasn't helped. This isn't too serious an issue because I can just shut down my computer and turn it on the old-fashioned way, but it is annoying and I would appreciate some advice going forward as I'm currently stumped.

    Read the article

  • kernel software trap handling

    - by Tony
    I'm reading a book on Windows Internals and there's something I don't understand: "The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt." So does this mean that software interrupts or exceptions will only be handled under these conditions:
    a. when the kernel is executing a function from said thread related to the software exception (trap)
    b. when it is already handling a hardware trap
    Is my understanding of this correct? The next bit: "In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap." I don't quite understand what it means by 'front-end trap handling functions' and 'field the trap'? Can anyone help me?

    Read the article

  • Ulimit settings in Oracle 11g on Linux 5

    - by Stuart
    Is there an issue with "ulimit -Hn" being set too low (at 1024) when Oracle recommends 65536? This is for Oracle 64-bit 11g on Linux 5. It is one of the settings that appears to be woefully short of its recommendation. But I am also aware that the database server in question is an Oracle Data Guard local standby and should only really have a connection or two from its primary database server (to ship the redo logs across). The local standby database server has 'hung' about 3 times in as many months and then requires a reboot. I do not have access to this server, so I rely on others to look at logs etc. The sanity check on kernel params uncovered the low value for "ulimit -Hn". Has anyone ever seen that 'low' value cause a hang or crash?
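
    For reference, a minimal sketch of how the hard nofile limit is usually raised on that kind of system (assuming the database runs as an "oracle" user and PAM limits are in effect; the entries are illustrative and not a verified fix for the hangs):

        # Check the current hard limit for open files as the oracle user (assumed username).
        su - oracle -c 'ulimit -Hn'

        # As root, raise it to Oracle's documented 65536 in /etc/security/limits.conf,
        # then log in again as oracle and re-check.
        cat >> /etc/security/limits.conf <<'EOF'
        oracle  soft  nofile  65536
        oracle  hard  nofile  65536
        EOF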

    Read the article

  • How to Best Optimize Model Transforms and Import 3DS Animations Into XNA 4.0?

    - by Jason R. Mick
    I'm a relative beginner to XNA, but I'm trying to build a multi-purpose (3D) game framework in XNA 4. I've been using the Reed (O'Reilly) and Cawood/McGee (McGraw Hill) guides. My question is multi-faceted and involves how to most efficiently handle models. I'm using 3DS Max 2010 with kw-Xport to ship out my models as .X files. I solved an early problem by using my depth stencil state. My models are now loading properly (yay!) and I have basic bounding working; I just want to optimize transforming models and get animations working as a next step. My questions on models are:
    1. Do you have any suggestions for good resources on exporting 3DS animations to XNA? I've seen some resources on how to handle animations in XNA, but most skimp on basic topics of how to convert multi-animation 3DS files. For example, how do I take one big long string of keyframed animations (say running, frames 5-20, climbing, frames 25-45, etc.) and turn them into named XNA animations? To my understanding every XNA animation has to have a name, but I haven't seen any tutorials on creating a new named animation from a subset of frames.
    2. Is it faster to load a model once and animate/transform that base model on the fly at draw time, or to load multiple models? My game will have multiple enemies, and I've already seen some lagginess in XNA, so I want to make my code efficient...
    3. I've heard people on App Hub talking about making custom content processors for models -- what is the benefit of this? Does it speed up transforming or animating the models? If so, can you point me towards any good (model-centric) tutorials? (I've built a custom height map content processor to generate terrain, following Cawood's examples; I'm just a bit confused as to how a model content processor would be implemented.)

    Read the article

  • Performance-Driven Development

    - by BuckWoody
    I was reading a blog yesterday about the evils of SELECT *. The author pointed out that it's almost always a bad idea to use SELECT * for a query, but in the case of SQL Azure (or any cloud database, for that matter) it's especially bad, since you're paying for each transmission that comes down the line. A very good point indeed. This got me to thinking - shouldn't we treat ALL programming that way? In other words, wouldn't it make sense to pretend that we are paying for every chunk of data - a little less for a bit, a lot more for a BLOB or VARCHAR(MAX), that sort of thing? In effect, we really are paying for that. Which led me to the thought of Performance-Driven Development, or the act of programming with the goal of having the fastest code from the very outset. This isn't an original title, since a quick Bing-search shows me a couple of offerings from Forrester and a professional in Israel who already used that title, but the general idea I'm thinking of is assigning a "cost" to each code round-trip, be it network, storage, trip time and other variables, and then rewarding the developers that come up with the fastest code. I wonder what kind of throughput and round-trip times you could get if your developers were paid on a scale of how fast the application performed...

    Read the article

  • Can't find my.cnf on my VPS

    - by dan
    OK, I am a total noob when it comes to servers (but eager to learn). I am renting a VPS so I can host a Magento store. The VPS is using CentOS 5 and DirectAdmin and XEN virtualization. I've read a bit about how to optimize Magento and one suggestion is to edit 'my.cnf'. However I can't find this file anywhere from within DirectAdmin. I also can't connect to the VPS via console; my host offers console access via their website but it won't let me enter my root password, it just hangs... (how do people normally connect to their Linux VPS?) Please help? Thank you.
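
    For what it's worth, my.cnf is often not created by default. A small sketch (standard MySQL behaviour, run over SSH or the provider's console) of how to find which paths MySQL reads its configuration from:

        # Ask the client where configuration files are read from, in order.
        mysql --help | grep -A 1 "Default options"

        # Or just search the usual locations for an existing file.
        find /etc /usr/local -name my.cnf 2>/dev/null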

    Read the article

  • apache2 and php slow first load on Ubuntu VPS - something like mysqltuner but for apache?

    - by talkingnews
    Ubuntu 10.10 64-bit VPS, 512Mb dedicated RAM. MySQL tuned so that mysqltuner is completely happy. Used RAM never above 350Mb out of the 493 available. Load never exceeds 1.04 or so. httpd.conf tuned as per all the guides for a VPS of that memory - number of preforks, spares etc. But for the FIRST load of a site after not having visited it for a while, it's taking ages.
    First load: Parse Time: 3.576 - Number of Queries: 50 - Query Time: 0.019723195953369
    Reload: Parse Time: 0.096 - Number of Queries: 39 - Query Time: 0.0066126374511719
    Subsequent reloads will be at this speed. htop shows two items as soon as I load that page for the first time: php-cgi and /usr/sbin/apache2 -k start. I'm using suPHP but I've tried fast-cgi and cgi. Stuck now; a weekend of tweaking has brought me nothing. Advice appreciated.
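
    One way to narrow down where that first-hit time goes (a diagnostic sketch, not from the original post; replace the URL with the real site) is to break a cold request down with curl's timing variables:

        # Show DNS, connect, time-to-first-byte and total time for a single hit.
        curl -o /dev/null -s -w 'dns %{time_namelookup}s  connect %{time_connect}s  ttfb %{time_starttransfer}s  total %{time_total}s\n' \
          http://example.com/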

    Read the article

  • What is the advantage of currying?

    - by Mad Scientist
    I just learned about currying, and while I think I understand the concept, I'm not seeing any big advantage in using it. As a trivial example I use a function that adds two values (written in ML). The version without currying would be

        fun add(x, y) = x + y

    and would be called as add(3, 5), while the curried version is

        fun add x y = x + y    (* short for val add = fn x => fn y => x + y *)

    and would be called as add 3 5. It seems to me to be just syntactic sugar that removes one set of parentheses from defining and calling the function. I've seen currying listed as one of the important features of functional languages, and I'm a bit underwhelmed by it at the moment. The concept of creating a chain of functions that each consume a single parameter, instead of a function that takes a tuple, seems rather complicated to use for a simple change of syntax. Is the slightly simpler syntax the only motivation for currying, or am I missing some other advantages that are not obvious in my very simple example? Is currying just syntactic sugar?

    Read the article

  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I have upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but then after a few minutes the wireless would stop working even though I was still connected to the network. Lately things turned worse: while I connect to my wireless network, it just never works. I tried all sorts of solutions on offer here and in other forums but none worked. At best I got the wireless to work up until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, ran all the suggested updates, and receive the following response when I type iwconfig in the terminal:

        lo        no wireless extensions.

        wlan0     IEEE 802.11bgn  ESSID:"Arsenal"
                  Mode:Managed  Frequency:2.452 GHz  Access Point: 00:04:ED:48:67:89
                  Bit Rate=1 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-29 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:27  Invalid misc:57   Missed beacon:0

        eth0      no wireless extensions.

    Thanks in advance for any help that might be offered.

    Read the article

  • API Wordpress & Inksoft

    - by user105405
    I am new to this whole website design and API bit. My husband has bought a license for the program InkSoft. Their site does not offer very much customization, so we decided to buy a Wordpress.org site that is hosted through GoDaddy. With all of that said, I am trying to figure out a way to take the products that are on InkSoft's website, which get their information from the suppliers' warehouses (for things like inventory), and put them on the Wordpress site. There is an area on InkSoft where I can access "Store API... API feeds." I guess I am just confused on where to put this type of stuff in the WordPress site, or how to put it in there. If I go to the "Products" area on this Store API, I am given a URL that deals with the product stuff and I am also given a HUGE list of stuff that contains things such as:

        <ProductCategoryList>
          <vw_product_categories product_category_id="1000076" name="Most Popular" path="Most Popular" thumburl_front="REPLACE_DOMAIN_WITH/images/publishers/2433/ProductCategories/Most_Popular/80.gif" />

    Can anyone give me directions on what and how to use all this stuff? Thank you!!

    Read the article

  • Can not change to a static IP in Fedora 19

    - by user196272
    I'm having a bit of a weird situation. I've installed Fedora Linux 19 onto a virtual machine with no GUI. Initially eth0 does not show up when I perform ifconfig. When I run dmesg | grep eth I see the adapter, but it says it changed names to p2p1. Once I perform the ifconfig p2p1 up command it shows up. Now when I try to edit /etc/sysconfig/network-scripts/ifcfg-p2p1, it does not exist; the only scripts that are there are lo and enp0s3. If I try to create the ifcfg-p2p1 file with the correct settings, I can not restart the network service. I tried editing the enp0s3 file, but that did not work. I'm fairly new to Linux and not sure what else to put in here, so if you need any more information just let me know and I'll put it in here.
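
    For reference, a minimal sketch of what a static ifcfg-p2p1 could look like on Fedora 19 (the addresses below are placeholders, and this assumes the classic network service rather than NetworkManager ends up managing the interface):

        # /etc/sysconfig/network-scripts/ifcfg-p2p1  (sketch with placeholder addresses)
        DEVICE=p2p1
        BOOTPROTO=none
        ONBOOT=yes
        IPADDR=192.168.1.50
        PREFIX=24
        GATEWAY=192.168.1.1
        DNS1=192.168.1.1

    After saving it, something like systemctl restart network.service (or a reboot) followed by ip addr show p2p1 would show whether the address was applied.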

    Read the article

  • where to look for computer technician jobs

    - by Kareem
    Hi, I am currently studying for the A+ certification. I plan to have it by the end of this month and I plan to go on for further education. I've built two high-end computers by myself for a friend and a family member, installing the OS and everything. I'm looking into finding either a computer assembly or computer technician job. Where is the best place to look for one? I've looked into Best Buy but I find their Geek Squad to be a little bit shady. Where is a good place to look for a full-time entry-level computer technician job, just starting out, in Tampa, FL?

    Read the article

  • Cannot access Application configured on local IIS 7 using IP/machine name

    - by SilverHorse
    I have a Windows 7 64-bit machine and IIS 7. I have a default website in IIS. Its binding is {IP: All Unassigned, Port: 80, Host Name: blank}. I have added a new ASP.NET application to that website, mapped its physical path, and set the virtual path as "MyWebApp". The application pool for "MyWebApp" is "DefaultAppPool" {.NET Framework: 4.0; Managed Pipeline Mode: Classic}. The problem I am facing is that I can access the website using http://localhost, http://IP.IP.IP.IP and http://MyMachineName, but I can not access the application other than by using http://localhost/MyWebApp. What should I do if I want to access the web app using http://MyMachineName/MyWebApp or http://IP.IP.IP.IP/MyWebApp? Please note: I have already created an inbound rule to allow all HTTP traffic for port 80 in the firewall settings.

    Read the article

  • Install NPM Packages Automatically for Node.js on Windows Azure Web Site

    - by Shaun
    In one of my previous posts I described and demonstrated how to use NPM packages in Node.js and Windows Azure Web Site (WAWS). In that post I used the NPM command to install packages, and then used Git for Windows to commit my changes and sync them to the WAWS git repository. Then WAWS triggers a new deployment to host my Node.js application. Someone may notice that an NPM package may contain many files and could be fairly large. For example, the "azure" package, which is the Windows Azure SDK for Node.js, is about 6MB. Another popular package, "express", which is a rich MVC framework for Node.js, is about 1MB. When I first push my code to Windows Azure, all of them must be uploaded to the cloud. Is it possible to let Windows Azure download and install these packages for us? In this post, I will introduce how to make WAWS install all required packages for us when deploying.

    Let's Start with Demo

    A demo is most straightforward. Let's create a new WAWS and clone it to my local disk. Drag the folder into Git for Windows so that it can help us commit and push. Please refer to this post if you are not familiar with how to use Windows Azure Web Site, Git deployment, git clone and Git for Windows. Then open a command window and install a package in our code folder. Let's say I want to install "express". Then I created a new Node.js file named "server.js" and pasted the code below.

        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    If we switch to Git for Windows right now we will find that it detected the changes we made, which include "server.js" and all files under the "node_modules" folder. What we need to upload should only be our source code, but the huge package files have to be uploaded as well. Now I will show you how to exclude them and let Windows Azure install the packages on the cloud.

    First we need to add a special file named ".gitignore". This cannot be done directly from the file explorer, since the file name consists only of an extension, so we need to do it from the command line. Navigate to the local repository folder and execute the command below to create an empty file named ".gitignore". If the command window asks for input just press Enter.

        echo > .gitignore

    Now open this file, copy the content below and save.

        node_modules

    Now if we switch to Git for Windows we will find that the packages under "node_modules" are no longer in the change list. So if we commit and push, the "express" packages will not be uploaded to Windows Azure.

    Second, let's tell Windows Azure which packages it needs to install when deploying. Create another file named "package.json", copy the content below into that file and save.

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*"
            }
        }

    Now back in Git for Windows, commit our changes and push them to WAWS. Then let's open the WAWS in the developer portal; we will see that a new deployment has finished. Clicking the arrow at the right side of this deployment we can see how WAWS handled it. In particular we can see that WAWS executed NPM. And if we open the log we can review what command WAWS executed to install the packages and the installation output messages. As you can see, WAWS installed "express" for me on the cloud side, so I don't need to upload the whole bunch of the package to Azure.
    Open this website and we can see the result, which proves that "express" has been installed successfully.

    What's Happened Under the Hood

    Now let's explain a bit what ".gitignore" and "package.json" mean. The ".gitignore" is an ignore configuration file for a git repository. All files and folders listed in ".gitignore" will be skipped from git push. In the example above I copied "node_modules" into this file in my local repository. This means: do not track and upload any files under the "node_modules" folder. So by using ".gitignore" I excluded all packages from being uploaded to Windows Azure. ".gitignore" can contain files and folders. It can also contain the files and folders that we do NOT want to ignore. In the next section we will see how to use the un-ignore syntax to include the SQL package.

    The "package.json" file is the package definition file for a Node.js application. We can define the application name, version, description, author and other information in it in JSON format. We can also list the dependent packages, to indicate which packages this Node.js application needs. In WAWS, name and version are required. When a deployment happens, WAWS looks into this file, finds the dependent packages and executes the NPM command to install them one by one. So in the demo above I copied "express" into this file so that WAWS installs it for me automatically.

    I updated the dependencies section of the "package.json" file manually, but this can be done partially automatically. If we have a valid "package.json" in our local repository, then when we are going to install some packages we can specify the "--save" parameter in the "npm install" command, so that NPM will update the dependencies part for us. For example, when I wanted to install the "azure" package I would execute the command below. Note that I added "--save" to the command.

        npm install azure --save

    Once it finishes, my "package.json" is updated automatically. Each dependent package is listed here: the JSON key is the package name while the value is the version range. Below is a brief list of the version range formats. For more information about "package.json" please refer here.

        version    - Must match the version exactly. Example: "azure": "0.6.7"
        >=version  - Must be equal to or greater than the version. Example: "azure": ">0.6.0"
        1.2.x      - The version number must start with the supplied digits, but any digit may be used in place of the x. Example: "azure": "0.6.x"
        ~version   - The version must be at least as high as the range, and it must be less than the next major revision above the range. Example: "azure": "~0.6.7"
        *          - Matches any version. Example: "azure": "*"

    And WAWS will install the proper version of the packages based on what you defined here. The process of WAWS git deployment and NPM installation follows the flow described above.

    But Some Packages…

    As we know, when we specify the dependencies in "package.json", WAWS will download and install them on the cloud. For most packages this works very well, but there are some special packages that may not work. That is, if the package installation needs some special environment prerequisites, it might fail. For example, the SQL Server Driver for Node.js package needs "node-gyp", Python and C++ 2010 installed on the target machine during the NPM installation. If we just put "msnodesql" in the "package.json" file and push it to WAWS, the deployment will fail since there's no "node-gyp", Python or C++ 2010 in the WAWS virtual machine. For example, the "server.js" file.
        var express = require("express");
        var app = express();

        app.get("/", function(req, res) {
            res.send("Hello Node.js and Express.");
        });

        var sql = require("msnodesql");
        var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:tqy4c0isfr.database.windows.net,1433;Database=msteched2012;Uid=shaunxu@tqy4c0isfr;Pwd=P@ssw0rd123;Encrypt=yes;Connection Timeout=30;";
        app.get("/sql", function (req, res) {
            sql.open(connectionString, function (err, conn) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot open connection.");
                }
                else {
                    conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                        if (err) {
                            console.log(err);
                            res.send(500, "Cannot retrieve records.");
                        }
                        else {
                            res.json(results);
                        }
                    });
                }
            });
        });

        console.log("Web application opened.");
        app.listen(process.env.PORT);

    The "package.json" file.

        {
            "name": "npmdemo",
            "version": "1.0.0",
            "dependencies": {
                "express": "*",
                "msnodesql": "*"
            }
        }

    And it failed to deploy to WAWS. From the NPM log we can see it's because "msnodesql" cannot be installed on WAWS. The solution is: in the ".gitignore" file we should ignore all packages except "msnodesql", and upload that package ourselves. This can be done with the content below. We first un-ignore the "node_modules" folder, then ignore all of its sub folders while still letting git look inside each of them, and then un-ignore the one sub folder named "msnodesql", which is the SQL Server Node.js driver.

        !node_modules/

        node_modules/*
        !node_modules/msnodesql

    For more information about the syntax of ".gitignore" please refer to this thread. Now if we go to Git for Windows we will find that "msnodesql" is included in the uncommitted set while "express" is not. I also need to remove the dependency on "msnodesql" from "package.json". Commit and push to WAWS. Now we can see the deployment completed successfully, and we can use the Windows Azure SQL Database from our Node.js application through the "msnodesql" package we uploaded.

    Summary

    In this post I demonstrated how to leverage the deployment process of Windows Azure Web Site to install NPM packages during the publish action. With the ".gitignore" and "package.json" files we can keep the dependent packages out of our Node.js repository and let Windows Azure Web Site download and install them when deploying. For the special packages that cannot be installed by Windows Azure Web Site, such as "msnodesql", we can put them into the publish payload instead. With the combination of Windows Azure Web Site, Node.js and NPM it becomes even easier and quicker for us to develop and deploy our Node.js applications to the cloud.

    Hope this helps, Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Windows Server 2008 R2 Server Core with AD Role having GUI Admin Console

    - by Robert Koritnik
    I would like to set up a machine with Windows Server 2008 R2 Server Core and install the following server roles: Active Directory Domain Services, Active Directory Federation Services, and Active Directory Lightweight Directory Services (I'm not sure whether I actually need this last one - see note below). I'm obviously going to install Enterprise Edition.
    Question: Can I have an AD administration graphical user interface to manage Active Directory on a Server Core machine? I would really like to have it, because I'm not so keen on doing stuff using PowerShell, and because I've never managed AD before either, a GUI would be much more helpful: I could at least visualize it a bit better and maybe understand AD structures.
    Note: I'm also setting up a development environment machine and installing Sharepoint Foundation 2010 on it, so it would use this AD machine.

    Read the article

  • AWS:EC2:: Why is my web folder called "html"?

    - by heathub
    P.S. Q stands for Question. My environment is: Amazon Linux 64-bit (Q1. I don't know if it's Ubuntu or Red Hat based; is there any way to check?) And I need to run PHP and MySQL, thus I installed httpd (Q2. is httpd == Apache??), but my default page says: please upload files to the /var/www/html folder. Q3. This is the first time I have set up an AWS EC2 server myself; my previous experience is hosting with a hosting company. Normally at a hosting company, my web directory is called "www" or "public_html" or "htdocs". Why is my folder named "/var/www/html"? Did I install the wrong Apache?

    Read the article
