Search Results

Search found 975 results on 39 pages for 'uploads'.

Page 29/39 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • How do I copy files between harddrives on Ubuntu CLI?

    - by ed209
    I have a dedicated server with a 120 GB main SSD. The server happens to come with a couple of 3000 GB hard drives. I'd like to use them to back up my main drive. Preferably, I'd like one as an exact copy of the main SSD and the other with incremental backups of the MySQL database and a user uploads directory. These are the drives I have:

      Disk /dev/sda: 120.0 GB, 120034123776 bytes
      255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000f2e18

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1            2048     4196352     2097152+  83  Linux
      /dev/sda2         4198400     5246976      524288+  83  Linux
      /dev/sda3         5249024   234441647   114596312   83  Linux

      Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

    The first problem I have is that I have no idea how to copy from one drive to another. Kind of embarrassing, I know, but I don't know where to start. I'm thinking of this in terms of the Mac OS X CLI, where I'm able to copy between /Volumes - is there an equivalent? (There is nothing under /mnt or /media.)
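    The drives first need a filesystem and a mount point before anything can be copied to them; /Volumes-style auto-mounting doesn't happen for raw, unpartitioned disks. A minimal sketch, assuming /dev/sdb is the backup target and ext4 is acceptable (device names as in the fdisk output above, but double-check before running mkfs, which destroys whatever is on the disk):

      # partition and format the first 3 TB drive (GPT is needed above 2 TB)
      parted /dev/sdb mklabel gpt
      parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
      mkfs.ext4 /dev/sdb1

      # give it a mount point -- this directory plays the role of /Volumes/...
      mkdir -p /mnt/backup
      mount /dev/sdb1 /mnt/backup

      # copy the live system across (rsync preserves permissions and ownership)
      rsync -aAXv --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/mnt/*"} / /mnt/backup/

    Once mounted, ordinary cp or rsync between / and /mnt/backup works exactly like copying between volumes on a Mac.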

    Read the article

  • email attachments [closed]

    - by Alan Doolan
    My company currently uses software on a local machine that will take an email from the email server, extract the attachment, rename it and then add it to a folder on a webserver using FTP. This works well, but they are currently asking if it can be done 'in the cloud', or what they really mean: not locally. Is there anything that would do this on the server itself? I should clarify a bit. The attachments are various reports that are being sent to different email addresses (mostly Google corporate and free accounts). We need the reports to be in a folder on a webserver so that internal pages can take the information in the reports (CSV) and use it on the webpages or add it to a separate database. The key part is that the files need to be in the particular folders. Though it does work to have a computer running software that takes the files, renames them to the required name and uploads them to the folder, it relies too heavily on one computer working all the time. This is not something we can depend on at this point. I'll be honest: I'm a web developer and not strong with server systems past my particular standard requirements, so this is beyond me. Though yes, I am aware that my boss is not 100% sure what 'cloud' means but likes the word.
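    For what it's worth, the whole pipeline can run as a cron job on the webserver itself, which removes the dependency on the office PC. A rough sketch, assuming fetchmail is already configured in ~/.fetchmailrc for the report mailbox, and that the mpack (munpack) and procmail (formail) packages are installed -- all paths and names here are illustrative:

      #!/bin/sh
      # pull new mail for the reports account into the local spool
      fetchmail --silent

      # split the mailbox into messages and decode each one's MIME attachments
      mkdir -p /tmp/report-work && cd /tmp/report-work
      formail -s munpack -q < /var/mail/reports

      # rename the CSVs and drop them where the web pages expect them
      for f in *.csv; do
          mv "$f" "/var/www/reports/$(date +%F)-$f"
      done

    Hosted alternatives exist too (e.g. scripting the Gmail IMAP interface), but a server-side cron like this is the most direct translation of what the desktop tool already does.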

    Read the article

  • What does Firefox do when "scanning for viruses" after download?

    - by Joey
    Never mind the fact that Firefox is a browser and not an AV tool, but what exactly does it do after a download? Even on systems that have an up-to-date AV, this generates a pause of several seconds after the download (during which I can't open the file from within the download manager), and I have no idea what FF might be trying there. I know I can turn it off (I'm using FF only at work anyway), but I'm wondering. I can think of some things it might be: FF itself is an AV scanner and it loads signatures in the background and whatnot. Sounds highly unlikely, and it shouldn't need tens of seconds for 20 KiB files. FF tries to talk with the installed AV to munch the file. Sounds unneeded, given that most AV programs feature real-time protection anyway and therefore would have caught a virus already, and also because FF does this on systems without AV installed, too. FF uploads the file to some online virus checker. Unlikely and stupid. FF instructs some online virus checker to download the file and check it. Unlikely, and it would be a nice target for DoSing that service. FF generates a hash of the file and sends it somewhere (presumably Google) to check. They then respond with either "Whoa, that hash is totally a virus" or "Nope, that MD5 doesn't look very virus-y to me". I'm running out of better ideas. Anyone have a clue?

    Read the article

  • Slow upload, fast download on Windows 7 64bit system

    - by Malik
    I've got a weird problem: the download speeds on my desktop PC (Windows 7 Home Premium 64-bit) are consistently fast (approx. 400 kB/s) but uploads are very slow (around 6-10 kB/s). This has been going on for the last 3 weeks or so. I am a very competent user and troubleshooter, and have searched online for 2 weeks for a solution, to no avail. Part of the problem is that internet is provided over WiFi by my landlord and I have no access to the router (a BT Home Hub), although I know for sure he wouldn't have the first idea how to restrict my usage :) (rules that out). Anyway, I've tried: various drivers (my WiFi 'card' is a TP-Link TL-WN851N, and I've tried the TP-Link + Atheros + Qualcomm Atheros drivers suggested by Microsoft); various tweaks to network parameters (e.g. as suggested by SpeedOptimser); various tweaks to Windows 7 services (e.g. disabling/manual-ing unnecessary services); raising and lowering head onto a reasonably firm surface at moderate frequency (jk :D). None of the above have helped, and I'm officially asking for help now!! Thanks for your time and effort in advance!
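    Without access to the router, about the only thing left is to rule out the wireless link itself, since retransmissions on a marginal WiFi signal tend to hurt uploads far more than downloads. A few stock Windows 7 commands that help narrow it down (the gateway address is an assumption -- 192.168.1.254 is the usual BT Home Hub address, substitute yours):

      :: sustained packet loss here points at the radio link itself
      ping -n 50 192.168.1.254

      :: signal quality and channel of the wireless adapter
      netsh wlan show interfaces

      :: current TCP autotuning/offload settings, in case a tweak went wrong
      netsh interface tcp show global

    If the signal quality reported there is low, repositioning the antenna or forcing the adapter to 802.11g in the driver's advanced properties is worth a try before any more software tweaks.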

    Read the article

  • Need advice in setting up server. fastCGI, suExec, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with centOS 5 and WHM/cPanel. I would like to configure my server to meet my needs but I need a little help. It will only be my own websites being run on this server. I'm still a little green when it comes to server administration so please forgive my ignorance. What I Would Like to Have: I need some public directories to be writable (for user image uploads and things like that) but I don't want those directories to have 777 permissions. I need individual accounts to have the ability to set custom php settings for their own account without affecting other accounts, whether through a php.ini file or through .htaccess or any other method. I would like things to run as fast as possible, whether that means using a php optimizer or cacher, such as eaccelerator or xcache or anything else. I need things to be as secure as possible. Here Are My Questions What should I use for my php handler? DSO? CGI? fastCGI? suPHP? Other? Should I be using suEXEC? What are the benefits or downfalls of this? What php optimizer/cacher is best to use? Are there any other security tips I need to know about all of this? I'd appreciate any advice or direction that can be offered. Thanks!
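    On the per-account php.ini question: if you settle on suPHP, it looks for a per-account php.ini via the suPHP_ConfigPath directive, so each site can carry its own settings without affecting the others. A sketch of the idea (paths illustrative; the directive belongs in the account's vhost include, not .htaccess):

      <VirtualHost *:80>
          ServerName site1.example.com
          DocumentRoot /home/site1/public_html
          # suPHP will load /home/site1/etc/php.ini for this account only
          suPHP_ConfigPath /home/site1/etc
      </VirtualHost>

    FastCGI setups get the same effect differently: each account's PHP wrapper script can point the PHPRC environment variable at its own configuration directory.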

    Read the article

  • Understanding Unix Permissions (w/ ACL)

    - by Dr. DOT
    I am trying to set permissions on my server properly. Currently I have a number of directories and files chmod'd at 0777 -- but I am not comfortable with it being this way. So at the advice of a Server Fault specialist, I had my hosting provider install ACL on my shared virtual server. When I FTP to the server as my FTP user account "abc", I can do everything I need to do (and rightfully so), because all my dirs and files are owned by "abc", the group is "abc", and the first octet is set to 7 (rwx). That much I get. But here's where it gets dark gray for me. PHP is set to user "nobody". So when someone browses one of my web pages that either ends in .php or has some embedded PHP, I assume the last octet controls the access. Because all my dirs and files are owned by "abc" and assigned to group "abc", if the last octet were a 4 (r--) then the server would let the browser read the file. If it were a 6 (rw-) then the server would let the browser also write to the file or directory, correct? What if the web document does not end in .php or does not have any PHP embedded? What is the user then? How can I use ACL so that I don't have to set the permission to 6 (rw-) or even 7 (rwx)? [Not sure what execute does or means.] I'm just looking for some sort of policy settings to best lock down my dirs and files while allowing my PHP scripts to do uploads and write to files (so my users don't call me to tell me "permission denied"). Ok, thanks to anyone out there willing to lend me a hand. It is greatly appreciated.
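    This is exactly the case POSIX ACLs are meant to cover: leave the "other" bits at 0 and grant the PHP user its own entry instead. A minimal sketch, assuming PHP really does run as "nobody" and the writable tree is /home/abc/public_html/uploads (paths illustrative):

      # let nobody write into the uploads tree without touching "other"
      setfacl -R -m u:nobody:rwx /home/abc/public_html/uploads

      # default entry so files PHP creates later inherit the same access
      setfacl -R -d -m u:nobody:rwx /home/abc/public_html/uploads

      # verify -- getfacl shows the extra entry alongside owner/group/other
      getfacl /home/abc/public_html/uploads

    (The x bit on a directory is what allows entering it to reach the files inside; on a plain file it means execute, which PHP-written uploads shouldn't need.) Everything else can then stay at 0750/0640 with owner abc, and non-PHP static files are simply read by the web server user, which the same ACL entry covers with r-- instead of rwx.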

    Read the article

  • Euro character messed up during FTP transfer

    - by djechelon
    My customer is using a very outdated ecommerce management system on my hosting service. For that product, no support is provided anymore by the vendor. Brief explanation: the shop website, which claims to run under a LAMP stack, is built by an old Visual Basic Windows application running on MS Access. The user constructs the shop, defines the HTML template, adds products and categories, etc. Then the VB exe builds the PHP pages (one for each template page) and the SQL script to run on MySQL. It also uploads everything via FTP and runs the installation/upgrade script on its own. The problem: browsing the website, many products' descriptions are cut off before the euro sign. For example, what was supposed to be "Product price €1000" becomes "Product price". The analysis: MySQL contains a description truncated at the € sign, so it's not PHP's fault. The Access databases contain the full descriptions with the € sign, so it's not the fault of the webmaster writing bad descriptions or of eDisplay cutting them. The SQL that will run once the site gets uploaded, stored on my local machine before upload, contains the € sign. The same script, after being FTPed by eDisplay and opened with nano over SSH, shows the € sign messed up like this: ^À. The vsftpd log reports (obfuscated for privacy): Sat Dec 15 11:16:57 2012 22 xxx.xxx.128.13 1112727 /srv/www/domains/xxxxxx.it/htdocs/db.sql b _ i r xxxxxxx ftp 0 * c -- which is a binary transfer (and also a huge security vulnerability, because you can download the whole database over unauthenticated HTTP). The eDisplay internal FTP client provides no option for ASCII/binary transfer modes. [Add] Trying to manually upload the SQL file via SFTP still messes up the euro sign. [Add2] Trying to manually upload using the Xftp client with explicit ASCII mode doesn't fix it either. It looks like the file gets uploaded as binary. Perhaps on the customer's previous host it all worked fine because that was a Windows host. The server: it's an Azure virtual machine running openSUSE 12.2 with both vsftpd and OpenSSH. The question: without asking the customer to manually upload files using FileZilla or to replace € with &euro;, because he refuses, what can I do on the server side to prevent vsftpd from screwing up the euro sign?
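    One observation worth checking before blaming FTP: a € that is fine locally but renders as a single junk byte in nano usually means the dump is in Windows-1252 (where € is the single byte 0x80) being viewed as Latin-1/UTF-8 -- i.e., the binary transfer is faithful and the encoding is the problem. That would also explain the truncation: MySQL cuts a utf8 string at the first invalid byte sequence it meets. If that diagnosis holds, the server-side fix is a transcoding step between upload and import rather than an FTP setting. A sketch, assuming the dump really is Windows-1252 and the database expects UTF-8:

      # sanity-check the encoding guess first
      file db.sql

      # convert before the install script feeds it to MySQL
      iconv -f WINDOWS-1252 -t UTF-8 db.sql > db-utf8.sql && mv db-utf8.sql db.sql

    vsftpd itself can't hook this on upload, but wrapping the installation/upgrade script so it runs iconv first (file names above are illustrative) would keep the customer's workflow hands-off.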

    Read the article

  • FTP Server upload and filesystem questions

    - by Alex
    I'm a photographer who mainly does event photography. A while ago I bought myself a Nikon WT-4 wireless transmitter, a small device which connects via USB to my Nikon D700 DSLR and then establishes a WiFi connection to an existing WLAN. It can then upload any pictures I take via FTP to an FTP server somewhere on the network. On my laptop I have a piece of software which checks a given folder on the disk regularly; this software is smart enough to look at the modified-file timestamp, and if the timestamp is less than 10 seconds old, it will skip the file in that iteration of the import scan. The problem I've discovered seems to be inherent to the FTP protocol, as I have the same problem with Windows 7's built-in IIS server as I do with FileZilla FTP Server. When the transmitter starts to upload a file, the FTP server creates a small 300-500 KB file with the correct filename on the disk, but then does nothing with the file until it has completely received it via FTP. So it seems to create this small dummy file, buffer the remainder of the FTP upload until it's finished, and then dump the rest into the dummy file, making it the correct size. Problem is, these uploads take about 15-30 seconds depending on reception, but since the folder-watch tool will already try to import any file older than 10 seconds, it always tries to import the small dummy files, which obviously fails as they're not complete yet. Is there any way to disable this behaviour? Ideally I would like my file to only show up once it's been completely uploaded. Or perhaps someone knows another FTP server application (it has to run on Win7) which does not show this behaviour?
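    If switching FTP servers turns out to be unnecessary, the watcher side can also be made safe: a file still being written is normally held open by the FTP server, so trying to open it exclusively tells you whether the upload has finished. A sketch of that check in PowerShell (path handling illustrative), runnable on Windows 7:

      # returns $true once no other process holds the file open
      function Test-FileReady([string]$Path) {
          try {
              $fs = [System.IO.File]::Open($Path, 'Open', 'Read', 'None')
              $fs.Close()
              return $true
          } catch [System.IO.IOException] {
              return $false
          }
      }

    The other angle is a server that uploads to a temporary name and renames on completion (ProFTPD calls this HiddenStores; whether any Windows-native server offers the same would need checking) -- a rename is atomic, so the watcher never sees a partial file at all.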

    Read the article

  • How should I configure backup of my server?

    - by ed209
    I have just rented a dedicated server. If it helps, this is the config I have:

      CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (8 cores)
      RAM: 15975 MB
      Disk /dev/sda: 120.0 GB (114 GiB) -- doesn't contain a valid partition table
      Disk /dev/sdb: 3000.6 GB (2861 GiB) -- doesn't contain a valid partition table
      Disk /dev/sdc: 3000.6 GB (2861 GiB) -- doesn't contain a valid partition table

    /dev/sda is a 120 GB SSD. This is where I have Ubuntu/LAMP installed. It's the drive that will run my site. With the account I got two other drives of 3000 GB each, which I really don't need, but they came with the account. I figured I could use these to back up my main 120 GB drive. So a couple of things I wondered: Should I use these for backups? How should I back up? The data I want to back up is a user uploads directory full of images, and the database. Everything else is either in a code repo or backed up some other way. For example, it would be nice to know there is a disk image of the 120 GB drive somewhere that I can copy over should there be any problems, but equally I don't mind doing a fresh install of all the software and copying over just the images and database dump. Thanks for your advice! (Also, happy to not use the two other drives and back up elsewhere if it's more sensible.)
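    Assuming one of the 3 TB drives is formatted and mounted at /backup (the earlier question from the same poster on this page covers mkfs and mount), a pair of cron entries covers the two things that actually change -- the database and the uploads directory. A sketch in /etc/cron.d syntax (note the user field, and that % must be escaped inside crontabs; assumes MySQL credentials in /root/.my.cnf):

      # /etc/cron.d/backup -- nightly DB dump plus a mirror of the uploads
      0 3 * * *  root  mysqldump --all-databases | gzip > /backup/db/dump-$(date +\%F).sql.gz
      30 3 * * * root  rsync -a --delete /var/www/uploads/ /backup/uploads/

    For the "disk image" wish, a weekly rsync of / to the second drive (with --exclude for /proc, /sys, /dev and friends) gets most of the value of an image without taking the site down; a true dd image of a mounted, running disk is risky because the filesystem keeps changing underneath it.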

    Read the article

  • Getting PAM/user info into php - something like Net_Finger instead of a db?

    - by digitaltoast
    I've got a very small user group who just need to log in, upload, check and then move specific files to a different area when ready. Right now, I use the nginx PAM auth module to log them in against their Unix accounts. As their login is their home directory, I've already got the info to send the uploads to the right area -- one line of PHP and no database needed. But I'm maintaining a separate DB just so PHP can welcome them, grab their email and send them an email when processed. Yes, sure, I could use NoSQL or SQLite instead, so as to not need a whole MySQL install. But it occurred to me that, as I've got all these blank user fields for phone numbers that I could populate with any data, I could use something like PHP's Net_Finger. Which failed for me with:

      sudo pear install Net_Finger
      Starting to download Net_Finger-1.0.1.tgz (1,618 bytes)
      ....done: 1,618 bytes
      could not extract the package.xml file from "/build/buildd/php5-5.5.9+dfsg/pear-build-download/Net_Finger-1.0.1.tgz"
      Download of "pear/Net_Finger" succeeded, but it is not a valid package archive
      Error: cannot download "pear/Net_Finger"

    At which point I thought I'd stop and take a Server Fault reality check: is this a really bad/dangerous/stupid idea just to stop me having to maintain details in two places rather than one? Is there a better way? Googling shows that it's not an oft-asked thing, so perhaps with good reason?
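    Sidestepping the broken PEAR package entirely: the GECOS field (the comma-separated "full name, office, phones" slot in /etc/passwd that finger itself reads) is reachable from PHP without any network call, via the POSIX extension. A sketch, assuming the spare fields are repurposed to hold a display name and an email address (usernames here are illustrative):

      <?php
      // look up the passwd entry for the logged-in user
      $pw = posix_getpwnam('alice');

      // GECOS is comma-separated; chfn writes name,office,work-phone,home-phone
      list($displayName, $email) = array_pad(explode(',', $pw['gecos']), 2, '');

      echo "Welcome, {$displayName}\n";
      // ...later: mail($email, 'Files processed', $body) once the move completes

    Whether stuffing an email address into a phone-number field is wise is a separate question, but it does keep everything in one place -- and chfn lets users edit it themselves, for better or worse.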

    Read the article

  • visually documenting web server configuration and infrastructure

    - by Alex Ciarlillo
    I have just finished a large re-organization and update of our institution's web server(s). This server hosts 3 virtual hosts, 3-4 blogs, 2 wikis, some legacy static HTML pages, and many hosted documents (PDF, .jpg, .xls). I have organized the site into a structure of something like: /var/www/sites/vhost1, vhost2, vhost3; .../wordpress/blogX; .../mediawiki/wikiX. Data is in a separate directory structure so I can run a cron task over it to make sure it is all writeable and such. I then symlink to these data directories for each application: /var/www/data/vhost1, vhost2, vhost3; .../wordpress/blogX/uploads; .../mediawiki/wikiX/images. All Apache configs are in /etc/httpd/conf.d/vhosts.d/vhost1,2,3.conf. On top of this there is also a testing server which mirrors this setup. Once changes are fully tested, they are rsynced down to the live server. All the WordPress and MediaWiki installs are straight from SVN, and updates are done by switching branches or "svn up". So my question is: how can I best document this to share with a) co-workers, b) a possible future replacement, c) myself 6 months from now? Obviously I can make a wiki page, Excel document, whatever, and fill it with text, but I am looking for a more visual representation that I can use to explain the architecture to less-technical people. Ideally it would be awesome if this visual representation could then be expanded to get more technical details.
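    One low-friction option that stays in plain text (so it can live in SVN next to everything else) is Graphviz: describe the pieces and their relationships, render a PNG for the less-technical audience, and keep annotating as details accrue. A small sketch of this particular setup, not a complete map:

      // infra.dot -- render with: dot -Tpng infra.dot -o infra.png
      digraph infra {
          rankdir=LR;
          "test server" -> "live server" [label="rsync after testing"];
          "SVN" -> "/var/www/sites/wordpress" [label="svn up / switch"];
          "/etc/httpd/conf.d/vhosts.d/vhost1.conf" -> "/var/www/sites/vhost1";
          "/var/www/sites/wordpress/blog1/uploads" -> "/var/www/data/wordpress/blog1/uploads"
              [label="symlink", style=dashed];
      }

    Because the source is text, a future replacement can diff it like code; tools like yEd or Dia give prettier output but lose that property.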

    Read the article

  • Linux Server partitioning

    - by user1717735
    There's a lot of information about this out there, but also a lot of contradictory information… That's why I need some advice about it. So far, on the servers I had at home for test (or even "home production") purposes, I didn't really care about partitioning and configured everything in / plus a swap partition, over RAID 0. Nevertheless, this pattern can't apply to production servers. I have found a good starting point here, but it also depends on what the servers will be used for… So basically, I have a server on which there will be Apache, PHP, and MySQL. It will have to handle file uploads (up to 2 GB) and has 2 x 2 TB hard drives. I plan to set:

      /      100 GB
      /var   1000 GB  (Apache files and MySQL files will be here)
      /tmp   800 GB   (handles the PHP temp files)
      /home  96 GB
      swap   4 GB

    All of this is of course over RAID 1. But actually, it's not a big deal if I lose data that is being uploaded, so would it be interesting to mount /tmp over RAID 0 while keeping the rest over RAID 1? Sounds complicated…
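    Mixing RAID levels per mount point is less complicated than it sounds with Linux software RAID: each md array is built from one partition per disk, and each array picks its own level. A sketch, assuming matching partitions exist on both disks (sda2/sdb2 for /var, sda3/sdb3 for /tmp -- the numbering is illustrative):

      # mirrored array for /var -- survives a single-disk failure
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

      # striped array for /tmp -- faster and twice the space, zero redundancy,
      # which is fine for transient upload data
      mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3

      mkfs.ext4 /dev/md1 && mkfs.ext4 /dev/md2

    The one caveat: a striped /tmp dies if either disk dies, and a missing /tmp at boot can stop services from starting, so the nofail option in /etc/fstab is worth adding for it.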

    Read the article

  • Oddly placed CSS

    - by user3682473
    I want my news content to be completely centered (including image and text), but instead it's oddly placed to the right, like this: http://prntscr.com/3o7tjc I tried most ways to fix it and I can't find it... um... here is the HTML part:

      <div id="mainContentContainer">
        <div id="mainContent">
          <div class="postTitle">test</div>
          <div class="posterInfo">
            <img width="40%" class="profilePic" src="/site/uploads/avatars/f3780c97491dd9f62f0dd7b1b8bb090a0b9e87d0.png">
            <p>Posted by: <a class="postedBy" href="#">test</a></p>
          </div>
          <div class="postContent">
            <div class="postImageContainer" align="center">
              <img class="postImage" src="../uploads/img/test">
            </div>
            <div class="post">
              <p>test</p>
            </div>
            Comments have been disabled for this post.
          </div>
        </div>
        <div id="sidebar">
          Welcome, Admin<br><a href="logout.php">Logout</a><br>
        </div>
      </div>
      </div>

    and here is the CSS:

      body { margin: 0px; background-color: #6C9DDF; background-image: url("/assets/img/background.png"); background-repeat: no-repeat; }
      .hq { position: relative; top: 40px; width: 1300px; height: 100%; left: 1%; }
      #logo { position: absolute; width: 40%; height: 30%; right: 30%; z-index: 100; }
      #homebtn, #playbtn, #newsbtn, #helpbtn { background: url(/assets/img/menubtns.png) no-repeat; }
      #homebtn { background-image: url("/assets/img/home.png"); background-repeat: no-repeat; background-size: 75%; width: 204px; height: 184px; position: absolute; top: 318px; left: 353px; }
      #homebtn:hover { background-image: url("/assets/img/home-rollover.png"); }
      #playbtn { background-image: url("/assets/img/play.png"); background-repeat: no-repeat; background-size: 100%; width: 200px; height: 230px; position: absolute; top: 240px; left: 480px; }
      #playbtn:hover { background-image: url("/assets/img/play-rollover.png"); }
      #newsbtn { background-image: url("/assets/img/news.png"); background-repeat: no-repeat; background-size: 100%; width: 290px; height: 290px; position: absolute; top: 210px; left: 650px; }
      #newsbtn:hover { background-image: url("/assets/img/news-rollover.png"); }
      #helpbtn { background-image: url("/assets/img/help.png"); background-repeat: no-repeat; background-size: 100%; width: 330px; height: 380px; position: absolute; top: 180px; left: 930px; }
      #helpbtn:hover { background-image: url("/assets/img/help-rollover.png"); }
      #mainContentContainer { border-radius: 30px 30px 30px 30px; -moz-border-radius: 30px 30px 30px 30px; -webkit-border-radius: 30px 30px 30px 30px; border: 8px solid #000000; background-color: #FFE12F; position: relative; width: 1200px; top: 60px; left: 8%; padding: 50px; overflow: hidden; height: 100%; position: relative }
      #mainContent { position: relative; border-radius: 30px 30px 30px 30px; -moz-border-radius: 30px 30px 30px 30px; -webkit-border-radius: 30px 30px 30px 30px; border: 0px solid #000000; background-color: #ffffff; height: 100%; padding: 20px; float: left; width: 900px; }
      #sidebar { position: relative; border-radius: 30px 30px 30px 30px; -moz-border-radius: 30px 30px 30px 30px; -webkit-border-radius: 30px 30px 30px 30px; border: 0px solid #000000; background-color: #ffffff; height: 100%; padding: 20px; float: right; width: 200px; }
      .postTitle { font-size: 40px; font-weight: bold; color: #515151; text-align: center; }
      .text { text-align: center; }
      .title { text-decoration: none; color: #515151; }
      .title:visited { text-decoration: none; color: #515151; }
      .title:hover { text-decoration: underline; }
      .postedBy { text-decoration: none; }
      .posterInfo { float: left; padding: 5px; }
      .postContent { overflow: hidden; }
      .postImageContainer { padding: 5px; }
      .postImage { width: 100%; position: relative; }
      .profilePic { border-radius: 30px 30px 30px 30px; -moz-border-radius: 30px 30px 30px 30px; -webkit-border-radius: 30px 30px 30px 30px; border: 0px solid #000000; }
      .registerFormWrapper { float: left; width: 50%; }
      .commentFormContainer { margin-top: 45px; }
      .commentContent { border-radius: 30px; overflow: auto; resize: none; padding: 10px; outline: none; }
      .commentBTN { background: url("../img/comment.png"); width: 269px; height: 260px; border: none; position: relative; top: -50px; cursor: pointer; text-indent: -999px; }
      .commentBTN:hover { background: url("../img/commentHover.png"); }
      .ToonName { font-weight: bold; font-size: 20px; }
      .ToonNameInput { border-radius: 30px; padding: 5px; outline: none; }
      .commentBTNS { outline: none; }
      .commentFormInputContainer { width: 60%; float: left; }
      .registerInput { border-radius: 30px; padding: 5px; outline: none; }
      .loginInput { border-radius: 30px; padding: 5px; outline: none; }
      .inputLabel { display: inline-block; float: left; width: 200px; font-size: 20px; font-weight: bold; }

    I tried changing most possible combinations, and it didn't work exactly... Here is the fiddle: http://jsfiddle.net/2EYYC/
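    Without running the fiddle this is only a guess, but the geometry in that CSS would explain the symptom: .posterInfo (the avatar plus "Posted by") floats left, and .postContent's overflow: hidden makes it a block formatting context, so the whole post body gets squeezed into the strip to the right of the float instead of spanning the full width. Two variants worth trying:

      /* stop the post body from sitting beside the floated poster info */
      .postContent { overflow: hidden; clear: both; text-align: center; }

      /* or drop the float entirely and center the poster block itself */
      .posterInfo { float: none; text-align: center; padding: 5px; }

    Either way, text-align: center and the centered image container should then actually center within #mainContent.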

    Read the article

  • Windows Phone 7 Prototype 001: Speech Recognition on WP7

    At some point in the future it will be awesome when you can just tell your computer what to do and it does it - without typing, to help those of us with a blistering 11 WPM hunt-and-peck technique. Siri, a mobile digital assistant using speech recognition, was voted best tech at SXSW. I don't know about that one. Although I'm sure it will get better when Apple rebuilds it and bundles it on iPhone 5. So how would you do that on WP7? There have been some videos floating around showing Bing with some voice control, so obviously the phone has speech recognition. So what options are there? System.Speech? Not included in WP7/SL. Nuance software like Siri? No WP7/SL version yet. Invoking the SAPI dlls on the phone? No automation factory in WP7 SL. Web services using System.Speech and the mic on the phone? YES! The last one was my least favorite, but it works for now. I built a quick sample app to show how to do text-to-speech and speech recognition on WP7. @eklimczak will not be happy with the developer-designed UI. In this sample there is a web service which provides access to the System.Speech APIs in .NET. Basically it's just passing around byte arrays. On the phone it's using the XNA audio framework to play the text-to-speech stream and to record using the microphone. The code is pretty simple and you can download it from the link at the end of this post. The only things to note are adjusting the WCF config to handle larger byte uploads, and that the microphone API is a little weird with that 1-second buffer. It would be nice if you could just do mic.Start and mic.End, which would return an array of bytes, instead of managing your own stream inside the buffer-ready callback. A couple of downsides to this approach: Recording from the phone has some static. Could be my code, or my mic is bad / not calibrated right. Having to make web service calls instead of local access is not ideal (Microsoft, please add an API for the SAPI dlls). Although in the context of an app like Siri it's not so bad, since you need to do web service lookups to get data back anyway. Speech recognition quality really depends on either a) a limited grammar set, like the pizza grammar in the sample, or b) training the recognizer. For the latter, it would be annoying to have users train the system, and using the System.Speech stuff you'd have to have a profile for each user. So until Microsoft adds some speech client APIs on the phone or Nuance releases a WP7 product, this is a decent workaround. In the future I'd like to build something similar to Siri. I shall call it Iris in homage. I'm a big fan of mobile speech apps because frankly it's just not safe to Google while driving. Since some of my designer co-workers have been posting UI sketches for WP7, I'd like to start posting some code prototypes for things I try out on the phone. That will probably last 2 weeks, but for the moment I have like 10 posts in the queue. Sample Code - 100% guaranteed to work on my emulator.
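    The post doesn't inline the service code, but the shape it describes (System.Speech on the server, byte arrays over WCF) implies a contract roughly like the following. The names here are illustrative, not the sample's actual API:

      using System.ServiceModel;

      // server-side WCF contract: the phone sends text or recorded audio,
      // the service answers with synthesized audio or recognized text
      [ServiceContract]
      public interface ISpeechService
      {
          [OperationContract]
          byte[] Speak(string text);           // System.Speech.Synthesis -> WAV bytes

          [OperationContract]
          string Recognize(byte[] wavAudio);   // mic capture -> System.Speech.Recognition
      }

    On the WP7 side, the XNA Microphone class hands over a buffer roughly once a second, so the client accumulates those chunks into a MemoryStream and ships the whole thing to Recognize when the user stops talking.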

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Form application into a SharePoint 2007 (WSS version 3.0) site. In this post, I'll lay out our architecture thinking, and in part two, I'll describe the technical details. Business Case: Our challenge was this: we needed an easy way for a small group of our users to upload documents, in batches. They also needed to quickly set the metadata values, as well as set security on individual files. Using the out-of-the-box uploads just didn't fit. The single file upload allows setting the metadata, but our users would be uploading dozens of files. The multiple upload would allow our users to upload batches of files, but it doesn't allow them to set the metadata during upload. Also, neither upload method allows the users to set the permissions on the file. Our Solution: We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly). Another option would have been using a technology like Silverlight (or Flash?), but our team didn't have the skills necessary to build with these. So, after looking at what was technically possible, and also what skills our team had, we settled on a Windows Form application. We also decided to deliver it to the clients via ClickOnce, so we would have the ability to easily update the application in the future. Lessons Learned: After deploying our solution, we've learned a few lessons. First, you'll need to have the .NET Framework installed on the client computers. We knew this, but we still ran into issues making sure our users had the proper framework version installed. Second, we had issues with authentication. Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of. (See my earlier post about Clearing Saved Passwords for the fix to our problem.) Our third issue was how we dealt with uploading files that were named the same. Our application would replace the existing file with the new file, which is the way we expected it to work. However, our users wanted to upload weekly reports named the same as the previous week's. We solved this by using folders within the document library to keep the sets of reports separate from previous weeks. One last thing to consider before implementing a solution like this is what browsers and platforms your users will be working from. We only needed to support IE and Windows, which works fine. However, if you need to support Firefox, there are add-ons that allow ClickOnce to work with Firefox. This is still a Windows-only solution, though. In order to support Macs, you'd have to focus on either browser techniques (AJAX?) or Silverlight/Flash. Summary: Our users are happy with the ClickOnce app. It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with. We're happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • Install Oracle Configuration Manager's Standalone Collector

    - by Get Proactive Customer Adoption Team
    The Why and the How: If you have heard of Oracle Configuration Manager (OCM) but haven't installed it, I'm guessing this is for one of two reasons: either you don't know how it helps you, or you don't know how to install it. I'll address both of those reasons today. First, let's take a quick look at how My Oracle Support and Oracle Configuration Manager work together, to gain a good understanding of their differences and roles before we tackle the install. Oracle Configuration Manager is the tool that actually performs the data collection task. You deploy this lightweight piece of software into your system to collect configuration information about the system, and OCM uploads that data to Oracle's customer configuration repository. Oracle Support Engineers then have the configuration data available when you file a service request. You can also view the data through My Oracle Support. The real value is that the data Oracle Configuration Manager collects can help you avoid problems and get your Service Requests solved more quickly. When you view the information in My Oracle Support's user interface to OCM, it may help you avoid situations that create problems. The proactive tools included in Oracle Configuration Manager help you avoid issues before they occur. You also save time because you didn't need to open a service request. For example, you can use this capability when you need to compare your system configuration at two points in time, or to monitor system health. If you make the configuration data available to Oracle Support Engineers, then when you need to open a Service Request the data helps them diagnose and resolve your critical system issues more quickly, which means you get answers more quickly too. Quick Installation Process Overview: Before we dive into the step-by-step details, let me provide a quick overview. For some of you, this will be all you need. Log in to My Oracle Support and download the data collector from the Collector tab. If you don't see the Collector tab, click the More tab to gain access. On the Collector tab, you will find a drop-down list showing which platforms are available. You can also see more ways the Collector can help you if you click through the carousel of benefits. After you download the software for your platform, use FTP to move that file (.zip) from your PC to the server that hosts the Oracle software. Once you have that file on the server, locate the $ORACLE_HOME directory, and unzip the file within that directory. You can then use the command-line tool to start the installation process. The installation process requires your My Oracle Support credential (Support Identifier, username, and password) and, if applicable, a proxy specification (host IP address, port number, username and password).

    Installation Step-by-Step:
    1. Download the collector zip file from My Oracle Support and place it into your $ORACLE_HOME.
    2. Unzip the file; this will create a directory named CCR with several subdirectories.
    3. On the command line, go to $ORACLE_HOME/CCR/bin and run the command setupCCR.
    4. Provide your My Oracle Support credential: login, password, and Support Identifier.
    5. The installer will start deploying the collector application. You have installed the Collector.
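    Condensed into a shell transcript, those five steps look something like this (the zip file name is illustrative -- it varies by platform and version):

      cd $ORACLE_HOME
      unzip ccr-Production-*.zip    # creates $ORACLE_HOME/CCR and its subdirectories
      cd CCR/bin
      ./setupCCR                    # prompts for MOS login, password, Support Identifier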
    Post Installation: Now that you have installed successfully, the scheduler is ready to collect configuration information for the software available in your Oracle Home. By default, the first collection will take place the day after the installation. If you want to run an instrumentation script to start the configuration collection of your Oracle Database server, E-Business Suite, or Enterprise Manager, you will find more details in the Installation and Administration Guide for My Oracle Support Configuration Manager.

    Related documents available on My Oracle Support:
    - Oracle Configuration Manager Installation and Administration Guide [ID 728989.5]
    - Oracle Configuration Manager Prerequisites [ID 728473.5]
    - Oracle Configuration Manager Network Connectivity Test [ID 728970.5]
    - Oracle Configuration Manager Collection Overview [ID 728985.5]
    - Oracle Configuration Manager Security Overview [ID 728982.5]
    - Oracle Software Configuration Manager: Disconnected Mode Collection [ID 453412.1]

    Read the article

  • Get Exchange Online Mailbox Size in GB

    - by Brian Jackett
    As mentioned in my previous post, I was recently working with a customer to get started with Exchange Online PowerShell cmdlets. In this post I want to follow up and show one example of a difference in output between cmdlets in Exchange 2010 on-premises and Exchange Online. Problem: The customer was interested in getting the size of mailboxes in GB. For Exchange on-premises this is fairly easy. A fellow PFE, Gary Siepser, wrote an article explaining how to accomplish this (click here). Note that Gary's script will not work when remoting from a local machine that doesn't have the Exchange object model installed. A similar type of scenario exists if you are executing PowerShell against Exchange Online. The data type for TotalItemSize being returned (ByteQuantifiedSize) exists in the Exchange namespace. If the PowerShell session doesn't have access to that namespace (or hasn't loaded it), PowerShell works with an approximation of that data type. The customer found a sample script on this TechNet article that they attempted to use (minor edits by me to fit on the page and remove references to deleted item size):

      Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics |
          Select DisplayName, StorageLimitStatus,
          @{name="TotalItemSize (MB)"; expression={[math]::Round(
              ($_.TotalItemSize.Split("(")[1].Split(" ")[0].Replace(",","")/1MB),2)}},
          ItemCount |
          Sort "TotalItemSize (MB)" -Descending |
          Export-CSV "C:\My Documents\All Mailboxes.csv" -NoTypeInformation

    The script is targeted at Exchange 2010 but fails for Exchange Online: in Exchange Online the TotalItemSize property does not have a Split method, which ultimately causes the script to fail. Solution: A simple solution is to add a call to the ToString method off of the TotalItemSize property (the added ToString call below):

      Get-Mailbox -ResultSize Unlimited | Get-MailboxStatistics |
          Select DisplayName, StorageLimitStatus,
          @{name="TotalItemSize (MB)"; expression={[math]::Round(
              ($_.TotalItemSize.ToString().Split("(")[1].Split(" ")[0].Replace(",","")/1MB),2)}},
          ItemCount |
          Sort "TotalItemSize (MB)" -Descending |
          Export-CSV "C:\My Documents\All Mailboxes.csv" -NoTypeInformation

    This fixes the script, but the numerous string replacements and splits are an eyesore to me. I attempted to simplify the string manipulation with a regular expression (more info on regular expressions in PowerShell: click here). The result is a workable script with one nice feature: it adds a new member to the mailbox statistics called TotalItemSizeInBytes. With this member you can then convert into any byte level (KB, MB, GB, etc.) that suits your needs. You can download the full version of this script below (it includes the commands to connect to an Exchange Online session).

      $UserMailboxStats = Get-Mailbox -RecipientTypeDetails UserMailbox `
          -ResultSize Unlimited | Get-MailboxStatistics
      $UserMailboxStats | Add-Member -MemberType ScriptProperty -Name TotalItemSizeInBytes `
          -Value {$this.TotalItemSize -replace "(.*\()|,| [a-z]*\)", ""}
      $UserMailboxStats | Select-Object DisplayName,@{Name="TotalItemSize (GB)"; `
          Expression={[math]::Round($_.TotalItemSizeInBytes/1GB,2)}}

    Conclusion: Moving from on-premises to the cloud with PowerShell (and PowerShell remoting in general) can sometimes present new challenges due to what you have access to. This means that you must always test your code / scripts.
I still believe that not having to physically RDP to a server is a huge gain over some of the small hurdles you may encounter during the transition.  Scripting is the future of administration and makes you more valuable.  Hopefully this script and the concepts presented help you be a better admin / developer.         -Frog Out     Links The Get-MailboxStatistics Cmdlet, the TotalitemSize Property, and that pesky little “b” http://blogs.technet.com/b/gary/archive/2010/02/20/the-get-mailboxstatistics-cmdlet-the-totalitemsize-property-and-that-pesky-little-b.aspx   View Mailbox Sizes and Mailbox Quotas Using Windows PowerShell http://technet.microsoft.com/en-us/exchangelabshelp/gg576861#ViewAllMailboxes   Regular Expressions with Windows PowerShell http://www.regular-expressions.info/powershell.html   “I don’t always test my code…” image http://blogs.pinkelephant.com/images/uploads/conferences/I-dont-always-test-my-code-But-when-I-do-I-do-it-in-production.jpg   The One Thing: Brian Jackett and SharePoint 2010 http://www.youtube.com/watch?v=Sg_h66HMP9o

    Read the article

  • Antenna Aligner Part 7: Connecting the dots

    - by Chris George
    The app is basically ready, so I eagerly started to sort out creating the application entry in iTunes Connect. It's mostly intuitive actually, although I did have to create yet another icon for iTunes, sized 512x512 pixels; damn lucky I did the original graphics as vector! It took me longer to write the application description than anything else, I'm so not a tech author! I didn't like the way you have to 'make up' an SKU (Stock Keeping Unit) number. I had to do some googling to find out that it really doesn't matter what it is! It should be more obvious what to do from the actual website itself. That aside, the rest of it was actually fairly straightforward. As well as the details of the application, iPhone and iPad screenshots were also required. This posed somewhat of a problem. The iPhone ones were easy (as I have one!), but I do not (yet) own an iPad. So I thought I'd leave the iPad screenshots out for now. Once the application details were sorted, I moved on to the rights and pricing. At the start of the project I had made the decision that I wouldn't charge any more than the lowest amount, £0.59. I believe there is a market for this, but as my first foray into app development I didn't want to take the mick. I did realise, however, that I had built my app with a developer certificate and provisioning profile. This was fairly quickly corrected, and again Nomad made it very easy to switch over to the distribution certificate and provisioning profile. With a sense of excitement I cracked open iTunes Connect and clicked the upload button ... ...slight snag... When the Nomad project was started, Apple allowed uploads of these binaries via iTunes Connect. But this is no longer possible; the only upload path is via the Application Loader available from the Apple Developer program. This itself has one limitation: it only runs on a Mac! D'OH!!! Actually my language was somewhat more colourful when this fact came to light. After picking my laptop up off the floor and putting it back together... ok, only joking, but I did nearly throw it out of frustration!... I started to consider the options: I briefly entertained the idea of buying a cheap Mac from eBay... no, that defeats the whole object of what I'm doing, plus my wife wouldn't be impressed; there are some guys out there on the interweb who will upload your app for a small fee... but I don't really like the idea of giving some faceless email address my Apple developer login details, as well as my app binary; find some willing friend with a Mac who would kindly let me use it... obviously this is the only sensible option. In the meantime, I informed the Nomad team about this slight 'issue' and they are currently investigating possible solutions...

    Read the article

  • Open a File Browser From Your Current Command Prompt/Terminal Directory

    - by The Geek
    Ever been doing some work at the command line when you realized… it would be a lot easier if I could just use the mouse for this task? One command later, you'll have a window open to the same place that you're at. This same tip works in more than one operating system, so we'll detail how to do it in every way we know how. Open a File Browser in Windows: We've actually covered this before when we told you how to open an Explorer window from the command prompt's current directory, but we'll briefly review. Just type the following command into your command prompt: explorer . Note: You could actually just type "start ." instead. You'll then see a file browsing window set to the same directory you were previously at. And yes, this screenshot is from Vista, but it works the same in every version of Windows. If that wasn't good enough, you should really read how you can navigate in the File Open/Save dialogs with just the keyboard - now that's a Stupid Geek Trick! Open a File Browser in Linux: For this exercise, we're going to assume that you're using GNOME under a Linux flavor like Ubuntu, because that's the most common. From your terminal window, just type in the following command: nautilus . And the next thing you know, you'll have a file browser window open at the current location. You'll see some type of error message at the prompt, but you can pretty much ignore that. You can also use "gnome-open ." if you want. Open Finder in Mac OS X: All the Mac computers in this office are running Linux, so we haven't had a chance to verify, but you should be able to use the following command on OS X to open Finder in the current terminal location: open . Open Dolphin on Linux (KDE4): dolphin . Got any extra tips to help out your fellow readers? How do you do the same thing in KDE3? What about OS X? Leave your savvy advice in the comments, and maybe we'll update the article. Or not. Either way, it'll help somebody!

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components. User-Centered Forms Design Considerations: The starting point -- something you must always keep in mind with your own design -- is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:
    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically, too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.

    I am sure you have a few more tips on forms design for data entry users. Remember the user, and share them in the comments.

    Read the article

  • Test your internet connection - Emtel Fixed Broadband

    Already at the beginning of April, I had a phone conversation with my representative at Emtel Ltd. about some upcoming issues due to the ongoing construction work in my neighbourhood. Unfortunately, they finally raised the house two levels above ours, and of course this has a negative impact on the visibility between the WiMAX outdoor unit on the roof and the targeted access point at Medine. So, today I had a technical team here to do a site survey and to come up with potential solutions. Short version: it doesn't look good after all. The site survey: Well, the two technicians did their work properly, and even re-arranged the antenna to check the connection with another endpoint down at La Preneuse. But no improvements. Looks like we are out of luck, since the construction next door hasn't finished yet, and at the moment it even looks like they are planning to put at least one more level on top. I really wonder about the sanity of the responsible bodies at the local district council. But that's another story. Anyway, the outdoor unit was once again pointed towards Medine and properly fixed with new cable guides (air from the sea and rust...). Both of them did a good job and fine-tuned the reception signal to a mere 3 over 9, compared to the original 7 over 9 I had before the daily terror started. The site survey has been done, and now it's up to Emtel to come up with (better) solutions. Well, I wouldn't mind an unlimited, symmetric 3G/UMTS or even LTE connection. Let's see what they can do... Testing the connection: There are several online sites available which offer to check certain aspects of your internet connection. Personally, I'm used to speedtest.net and it works very well. I think it is good and necessary to check your connection from time to time, and only a couple of days ago I posted the following on Emtel's wall on Facebook (21.05.2013 - 14:06 hrs): "Dear Emtel, could you eventually provide an answer on the miserable results of SpeedTest? I chose Rose Hill (Hosted by Emtel Ltd.) as testing endpoint..." Sadly, no response to this. Seems that the marketing department is not willing to deal with customers on Facebook. Okay, over at speedtest.net you can use their Flash-based test suite to check your connection to quite a number of servers of different providers world-wide. It's actually very interesting to see the results for different endpoints and to compare them to each other. The results: Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE), respectively:

      [Screenshot] Speedtest.net result of 30.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Fixed Broadband)
      [Screenshot] Speedtest.net result of 30.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Fixed Broadband)

    Luckily, the results are quite similar in terms of connection speed, which is good. I'm currently on a WiMAX tariff called 'Classic Browsing 2', or Fixed Broadband as they call it now, which provides a symmetric line of 768 Kbps (roughly 0.75 Mbps). In terms of downloads or uploads, this means I should be able to transfer files in either direction at approximately 96 KB/s. Frankly speaking, thanks to compression, my choice of browser and operating system, I usually exceed this value and get download rates of up to 120 KB/s - not too bad after all. Only the ping times are a little bit of a concern. Depending on the distance, or better said on the number of hops between the endpoints, they indicate the amount of time it takes to send a packet from your machine to the remote server and get a response back. A lower value is better, and usually the ping is less than 300 ms between Mauritius and Europe. The alternatives in Mauritius: Not sure whether I should note this down, because for my requirements there are no alternatives to Emtel WiMAX at the moment. It would be great to have your opinion on the situation of internet connectivity in Mauritius. Are there really alternatives? And if so, what are the conditions?

    Read the article

  • Impersonation in ASP.NET MVC

    - by eibrahim
    I have an action that needs to read a file from a secure location, so I have to use impersonation to read the file. This code WORKS:

      [AcceptVerbs(HttpVerbs.Get)]
      public ActionResult DirectDownload(Guid id)
      {
          if (Impersonator.ImpersonateValidUser())
          {
              try
              {
                  var path = "path to file";
                  if (!System.IO.File.Exists(path))
                  {
                      return View("filenotfound");
                  }
                  var bytes = System.IO.File.ReadAllBytes(path);
                  return File(bytes, "application/octet-stream", "FileName");
              }
              catch (Exception e)
              {
                  Log.Exception(e);
              }
              finally
              {
                  Impersonator.UndoImpersonation();
              }
          }
          return View("filenotfound");
      }

    The only problem with the above code is that I have to read the entire file into memory, and I am going to be dealing with VERY large files, so this is not a good solution. But if I replace these two lines:

      var bytes = System.IO.File.ReadAllBytes(path);
      return File(bytes, "application/octet-stream", "FileName");

    with this:

      return File(path, "application/octet-stream", "FileName");

    it does NOT work, and I get the error message:

      Access to the path 'c:\projects\uploads\1\aa2bcbe7-ea99-499d-add8-c1fdac561b0e\Untitled 2.csv' is denied.

    I guess using the File result with a path tries to open the file at a later point in the request pipeline, when I have already "undone" the impersonation. Remember, the impersonation code works, because I can read the file into the bytes array. What I want to do, though, is stream the file to the client. Any idea how I can work around this? Thanks in advance.
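    The diagnosis in the question sounds right: FilePathResult opens the file during result execution, after the finally block has reverted the impersonation. One workaround worth trying (a sketch, not verified against this exact Impersonator helper): open a FileStream while still impersonated and return a FileStreamResult. Windows checks access when the handle is opened, so the open stream stays readable after the impersonation is undone, and MVC streams it to the client without buffering the whole file:

      if (Impersonator.ImpersonateValidUser())
      {
          try
          {
              // the handle is opened under the impersonated identity
              var stream = new System.IO.FileStream(
                  path, System.IO.FileMode.Open, System.IO.FileAccess.Read);
              return File(stream, "application/octet-stream", "FileName");
          }
          finally
          {
              Impersonator.UndoImpersonation();  // the open handle survives this
          }
      }
      return View("filenotfound");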

    Read the article

  • viewDidLoad not being called by parent UITabBarController

    - by Adam Bishop
    Sample: I've created a minimal set of files that highlight the issue here: http://uploads.omega.org.uk/Foo3.zip If viewDidLoad/initWithNibName are called, a message box is displayed. The message box is not displayed; therefore, the methods are not being called. Details: I have an application that is attempting to use a UITabBarController to switch between multiple views. The views are linked up to the UITabBarController using Interface Builder (select the tab page, open Attributes (Option-1), and fill in the NIB Name field), and so are displayed "automatically", with no extra code-behind to make them appear. Is it intended behaviour that views loaded like this do not have their viewDidLoad method executed? If not, how am I doing it wrong, and what do I need to change? If it is intended behaviour, I can think of a few work-arounds, but any suggestions are appreciated: Scrap the UITabBarController and implement the view switching myself (using initWithNibName and add/insert/push...Subview). Call each of the children's viewDidLoad methods manually in the UITabBarController's own viewDidLoad method. Thank you in advance for any help you can offer.

    Read the article

  • How do I use XML prefixes in C#?

    - by Andrew Mock
    EDIT: I have now published my app: http://pastebin.com/PYAxaTHU I was trying to make a console-based application that returns my temperature.

      using System;
      using System.Xml;

      namespace GetTemp
      {
          class Program
          {
              static void Main(string[] args)
              {
                  XmlDocument doc = new XmlDocument();
                  doc.LoadXml(downloadWebPage(
                      "http://www.andrewmock.com/uploads/example.xml"));
                  XmlNamespaceManager man = new XmlNamespaceManager(doc.NameTable);
                  man.AddNamespace("aws", "www.aws.com/aws");
                  XmlNode weather = doc.SelectSingleNode("aws:weather", man);
                  Console.WriteLine(weather.InnerText);
                  Console.ReadKey(false);
              }
          }
      }

    Here is the sample XML:

      <aws:weather xmlns:aws="http://www.aws.com/aws">
        <aws:api version="2.0"/>
        <aws:WebURL>http://weather.weatherbug.com/WA/Kenmore-weather.html?ZCode=Z5546&Units=0&stat=BOTHL</aws:WebURL>
        <aws:InputLocationURL>http://weather.weatherbug.com/WA/Kenmore-weather.html?ZCode=Z5546&Units=0</aws:InputLocationURL>
        <aws:station requestedID="BOTHL" id="BOTHL" name="Moorlands ES" city="Kenmore" state=" WA" zipcode="98028" country="USA" latitude="47.7383346557617" longitude="-122.230278015137"/>
        <aws:current-condition icon="http://deskwx.weatherbug.com/images/Forecast/icons/cond024.gif">Mostly Cloudy</aws:current-condition>
        <aws:temp units="&deg;F">40.2</aws:temp>
        <aws:rain-today units="&quot;">0</aws:rain-today>
        <aws:wind-speed units="mph">0</aws:wind-speed>
        <aws:wind-direction>WNW</aws:wind-direction>
        <aws:gust-speed units="mph">5</aws:gust-speed>
        <aws:gust-direction>NW</aws:gust-direction>
      </aws:weather>

    I'm just not sure how to use XML prefixes correctly here. What is wrong with this?
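    A likely culprit, offered as a guess since the rest of the app isn't shown: a prefix is bound to a namespace URI, and the URI must match the document's xmlns value character for character. The sample XML declares xmlns:aws="http://www.aws.com/aws", but the code registers "www.aws.com/aws" without the scheme, so the XPath never matches anything. Something like this should work:

      // the URI must match the document exactly, scheme included;
      // the "aws" prefix itself is only a local alias
      man.AddNamespace("aws", "http://www.aws.com/aws");

      // select a specific child rather than the whole document's text
      XmlNode temp = doc.SelectSingleNode("/aws:weather/aws:temp", man);
      Console.WriteLine(temp.InnerText);   // prints 40.2 for the sample above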

    Read the article
