Search Results

Search found 19097 results on 764 pages for 'form lifecycle'.


  • How to unmangle PDF format into a usable text or spreadsheet document?

    - by Chuck
    Upon requesting some daily/hourly sales data from a coworker who is responsible for such requests, I was given a series of PDF files. The point of sale program that is used, for some reason, answers requests for this type of information in the form of PDF files. The issue: the PDF files look to be in a format that should easily be copied and pasted into a spreadsheet. There are three columns that look to be neatly organized across two pages. When copy/pasting the first page, all three columns from the PDF's first page are dumped into a single column consisting of the Date followed by the Hours for the transactions on that day. The end of this Date/Time information is followed by all of the Total Sales values that should be attached to a Date and Time of the transaction. (NOTE: There are no duplicated Dates in the Date column, i.e. multiple transactions for a day only have one yyyy/mm/dd listed for the first row but not the following rows.)
    While it was a huge pain, it was possible to, in about four or five steps, get the single column of data broken out into three columns that matched the PDF. The second page of the PDF file, when attempting to copy/paste into a spreadsheet, creates a single column with the first third of the cells being the Dates from the PDF, the second third of the cells being the Hours of the transactions and the final third of the cells being filled with the Total Sales. After the copy/paste there is no way to figure out which Hours belong to which Dates or Total Sales, due to the lack of the duplicated Dates in the Date column as mentioned above.
    My PDF-fu is next to non-existent. I've just now started to work with PDF editors and some www.convertmyPDFforfree.com websites, so far with absolutely nothing remotely coming anywhere near usable output. (Both methods have so far done nothing but produce blank documents.) Before I go back and pester my co-worker into figuring out a way to create a report in some other format than PDF, is there any method by which to take the data that looks to be formatted correctly in a PDF and copy/paste it into a spreadsheet that will look the same? I appreciate any help that can be made available. The sales data isn't so sensitive that I couldn't part with a bit to let somebody actually see what it is that needs to be dealt with, just let me know. The PDFs are less than 100 KB each so sending them shouldn't be a burden to any interested party.
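    One approach worth trying before any online converter is a layout-preserving text dump of the PDF, which often keeps the three columns aligned so a spreadsheet can import them as whitespace- or fixed-width-delimited data. This is only a sketch: it assumes the poppler-utils pdftotext tool is available and that the columns are separated by runs of spaces; the file names are placeholders.

        pdftotext -layout sales.pdf sales.txt                                 # dump the text while preserving the column layout
        awk 'NF >= 3 {print $1 "," $2 "," $(NF)}' sales.txt > sales.csv       # rough CSV from the first, second, and last field on each line

    The awk step is the shakiest part, since Total Sales values containing spaces would need a smarter split; importing sales.txt into the spreadsheet's text-import wizard as fixed-width data is often the safer route.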


  • Remote access to internal machine (ssh port-forwarding)

    - by MacUsers
    I have a server (serv05) at work with a public IP, hosting two KVM guests - vtest1 & vtest2 - in two different private networks - 192.168.122.0 & 192.168.100.0 - respectively, this way:
    [root@serv05 ~]# ip -o addr show | grep -w inet
    1: lo inet 127.0.0.1/8 scope host lo
    2: eth0 inet xxx.xxx.xx.197/24 brd xxx.xxx.xx.255 scope global eth0
    4: virbr1 inet 192.168.100.1/24 brd 192.168.100.255 scope global virbr1
    6: virbr0 inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    [root@serv05 ~]# route -n
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr1
    xxx.xxx.xx.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
    192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
    169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
    0.0.0.0 xxx.xxx.xx.62 0.0.0.0 UG 0 0 0 eth0
    I've also set up IP forwarding and masquerading this way:
    iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
    iptables --append FORWARD --in-interface virbr0 -j ACCEPT
    All works up to this point. If I want to remote-access vtest1 (or vtest2), I first ssh to serv05 and then from there ssh to vtest1. Is there a way to set up port forwarding so that vtest1 can be accessed directly from the outside world? This is what I probably need to set up: external_ip (tcp port 4444) -> DNAT -> 192.168.122.50 (tcp port 22). I know it's easily doable using a SOHO router but I can't figure out how I can do that on a Linux box. Any help from you guys? Cheers!!
    Update 1: Now I've made sshd listen on both of the ports:
    [root@serv05 ssh]# netstat -tulpn | grep ssh
    tcp 0 0 xxx.xxx.xx.197:22 0.0.0.0:* LISTEN 5092/sshd
    tcp 0 0 xxx.xxx.xx.197:4444 0.0.0.0:* LISTEN 5092/sshd
    and port 4444 is allowed in the iptables rules:
    [root@serv05 sysconfig]# grep 4444 iptables
    -A PREROUTING -i eth0 -p tcp -m tcp --dport 4444 -j DNAT --to-destination 192.168.122.50:22
    -A INPUT -p tcp -m state --state NEW -m tcp --dport 4444 -j ACCEPT
    -A FORWARD -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT
    But I'm getting connection refused:
    maci:~ santa$ telnet serv05 4444
    Trying xxx.xxx.xx.197...
    telnet: connect to address xxx.xxx.xx.197: Connection refused
    telnet: Unable to connect to remote host
    Any idea what I'm still missing? Cheers!!
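    For reference, a typical DNAT-based setup for this scenario looks roughly like the following. It is a sketch rather than a confirmed fix for the question above: it assumes the guest at 192.168.122.50 is running sshd on port 22, that kernel forwarding is enabled, and that serv05 does the translation (with DNAT in place, sshd on serv05 does not need to listen on 4444 at all).

        sysctl -w net.ipv4.ip_forward=1                                  # make sure the host actually forwards packets
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 4444 \
            -j DNAT --to-destination 192.168.122.50:22                   # rewrite incoming :4444 to the guest's :22
        iptables -A FORWARD -i eth0 -o virbr0 -p tcp -d 192.168.122.50 --dport 22 \
            -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT           # allow the forwarded traffic in
        iptables -A FORWARD -i virbr0 -o eth0 -s 192.168.122.50 \
            -m state --state ESTABLISHED,RELATED -j ACCEPT               # and the replies back out
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    A quick test from outside would then be ssh -p 4444 user@serv05. Keeping only one mechanism (either the DNAT rule or sshd bound to 4444, not both) also makes troubleshooting easier.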


  • My new hard drive doesn't have rights on my old one?

    - by Allan
    Until recently I had a 1 TB hard disk with Windows 7 on it. I bought myself an SSD, removed the old hard disk and installed Windows 7 on the new one. After that I put the old hard disk back and formatted it, so now I can use it as backup and to keep files on. Nice, right? Well, I was updating the .NET Framework through Windows Update when it stalled. I noticed some space was used on one of the drives on my secondary 'previously primary' hard disk. Apparently it was the .NET Framework trying to save some temporary files on my secondary disk, because it was the one with the most space. It was like it didn't get access. I cancelled the installation and rebooted the computer. Then I wanted to remove the temporary folder on my secondary hard disk, but it told me "You don't have access by SYSTEM". I don't understand: my user is an administrator, it's the only user there is, and at the same time I can remove and delete any other folder on that drive. I'm gonna go a little pseudo here, but it feels as if the computer treats the old hard disk as protected from tampering by the new SSD. Also, I feel I should mention they are both listed as primary... primary 0 and primary 1, both using SATA cables. My old hard drive was partitioned into 3 drives. 2 of them said the current owner was 'Administrator/myPCname' and the third one said the current owner was 'SYSTEM'. I changed them all into the only one that I could pick from the list, which is my user, since the 'Administrators/myPCname' wasn't exactly wrong... could it be that they were somehow still attached to the old OS? The fact is I named my computer the exact same thing as it was called before installing a new Windows, so I can't really tell if it's an old ownership or not. Also, I'm currently logged in as 'myname' and I'm an administrator; now trying to delete the previously mentioned files it says 'you need access from 'myname'' – and it can't delete. That seems really messed up, I mean I'm logged in as the name it wants me to use. Is there maybe some way I could reset all the users on my computer? Or create some default? I don't know – I just want it to take a form I have always known, from a standard Windows point of view.
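    A common way to reset ownership and permissions on folders carried over from an old installation is the built-in takeown and icacls tools, run from an elevated Command Prompt. This is only a sketch of that approach, not a confirmed fix for the exact error above; D:\OldFolder is a placeholder path, and /R and /T make the commands recurse into subfolders.

        rem take ownership of the folder tree, then grant the Administrators group full control
        takeown /F D:\OldFolder /R /D Y
        icacls D:\OldFolder /grant Administrators:F /T

    Taking ownership of leftover per-folder trees like this is usually enough; taking ownership of an entire old system drive is best avoided unless needed.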


  • Need a script/batch/program that runs a command that won't be killed when the parent is killed

    - by billc.cn
    The scenario: I use Zabbix to monitor my servers and recently I wanted to add some more metrics for the Windows ones. For security reasons, I used Zabbix's User Parameter feature, but it limits the execution of external commands to about 3 seconds. After that, the command is forcibly killed. I want to run some long-running commands, so I used the trick from Zabbix's forum: run the command in the background, write the results to a file and use Zabbix to collect them. This is rather easy under *nix thanks to the "&" operator, but there is no such support in Windows' shell. To make things worse, when Zabbix forcibly kills the cmd.exe it used to evaluate the commands, all child processes die, including the unfinished background tasks. Thus I need something that can sever all the ties with its children so they won't be affected in the cascading kill.
    What I've tried:
    start and start /B - They do nothing, as the child always dies with the parent.
    WScript.Shell.Run as in invis.vbs from StackOverflow - Sometimes works. If the wscript process is forcibly killed, as opposed to quitting on its own, the children will die as well.
    hstart - similar results to invis.vbs.
    At command - This requires you to set an absolute time for the task to run, as opposed to an offset, so the code would be quite messy due to the limited shell scripting capability of Windows.
    (Edit) PsExec.exe from the SysInternals suite - It uses a service to launch the command, so it is not affected by the kill; however, it prints some banner and log info to StdErr and there's no switch to disable this. When I use 2>NUL to redirect them, Zabbix reports an error.
    After trying the above in different combinations, I noticed that if I call hstart from invis.vbs, the command started by the former will be left alone as a parentless process when invis.vbs is killed. However, since I need to redirect the output, the command I want to run is always in the form of cmd.exe /c ""command" "args"" >log. The vbs also removes all the quotes, so I have to encode the command with self-defined escape sequences. The end result involves about five levels of escaping/quoting, which is almost impossible to maintain. Anyone know any better solutions?
    Some requirements:
    Any bat/vbs/js/Win32 binary is acceptable
    Better not require multiple levels of escaping
    No .NET (including PowerShell) because it is not installed
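    One more candidate, not on the list above, is launching the command through WMI: Win32_Process.Create spawns the process outside cmd.exe's process tree, so it is not swept up when Zabbix kills the parent shell. This is a hedged sketch rather than a tested answer to the question; the script path and log file are placeholders, and whether the quoting survives a more complicated command line is exactly the kind of thing to verify first.

        wmic process call create "cmd /c C:\scripts\longtask.bat > C:\scripts\longtask.log 2>&1"

    Because the inner cmd /c owns the redirection, the output still ends up in the log file for Zabbix to collect on its next pass.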


  • Add Your Own Domain to Your WordPress.com Blog

    - by Matthew Guay
    Now that you’ve got a nice blog on WordPress.com, why not get your own domain to brand your site?  Here’s how you can easily register a new domain or move your existing domain to your WordPress site. By default, your free WordPress address is yourblog’sname.wordpress.com.  But whether this is a personal or a company blog, it can be nice to have your own domain to really brand your site and make it your own.  Or, if you already have another website and want to use WordPress as a blog for it, you could even add blog.yoursite.com or any other subdomain. Adding a domain to your WordPress.com is a paid upgrade; registering and mapping a new domain to your account costs $14.97 a year, while mapping a domain you already own to your WordPress blog costs $9.97 a year. Getting Started Login to your blog’s dashboard, click the arrow beside Upgrades in the sidebar, and select Domains. Enter the domain or subdomain you want to add to your site in the text box, and click Add domain to blog.   If you entered a new domain you want to register, WordPress will make sure the domain is available and then present you a registration form to register the domain.  Enter your information, and then click Register Domain.   Or, if you enter a domain that’s already registered, you will see the following prompt. If this domain is a domain you own, you can map it to WordPress.com.  Login to your domain registrar account and switch your nameserver to: NS1.WORDPRESS.COM NS2.WORDPRESS.COM NS3.WORDPRESS.COM Your DNS settings page for your domain may be different, depending on your registrar.  Here’s how our domain settings looked. Alternately, if you’re wanting to map a subdomain, such as blog.yoursite.com to your WordPress blog, create the following CNAME record on your domain register.  You may have to contact your domain registrar’s support to do this.  Substitute your subdomain, domain, and blog name when creating the record. subdomain.yourdomain.com. IN CNAME yourblog.wordpress.com. Once your settings are correct, click Try Again in your WordPress dashboard.  The DNS settings may take a while to update, but once WordPress can tell your DNS settings point to it, you will see the following confirmation screen.  Click Map Domain to add this domain to your WordPress blog. Now you’re ready to pay for your domain mapping or registration.  Depending on your purchase, the information and price shown may be different.  Here we’re mapping a domain we already have registered, so it costs $9.97.  Select your method of payment, enter your payment information or signin with your Paypal account, and continue as usual. Once your purchase is finished, you’ll be returned to the Domains page on WordPress.  Try going to your new domain, and make sure it opens your blog.  If it works, then click the bullet beside the new domain, and click Update Primary Domain.  Now, when people visit your WordPress site, they’ll see your new domain in the address bar.  You can still access your blog from your old yourname.wordpress.com address, but it will redirect to you new domain. Conclusion Having a personalized domain is a great way to make your blog more professional, while still taking advantage of the ease of use that WordPress.com offers.  And, if you have your own domain, you can easily move to your site traffic to a different hosting provider in the future if you need to.  The process is slightly complicated, but for $15/year we found this one of the best upgrades you could do to your WordPress.com blog. 
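    If you would like to confirm that the nameserver or CNAME change has actually propagated before clicking Try Again in the mapping step above, a quick check from any machine with the dig utility looks like this. The names yourdomain.com and blog.yoursite.com are placeholders for your own domain and subdomain.

        dig NS yourdomain.com +short        # should list ns1/ns2/ns3.wordpress.com once the nameserver change is live
        dig CNAME blog.yoursite.com +short  # should return yourblog.wordpress.com for a mapped subdomain

    DNS changes can take a while to spread, so an empty or stale answer here usually just means waiting a bit longer before retrying in the WordPress dashboard.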
If you want to see an example of a site created with WordPress, check out Matthew’s tech site techinch.com. And, if you’re just getting started with WordPress, check out our series on how to Start your WordPress.com blog, Personalize it, and Easily Post Content to it from anywhere.


  • Configuring Novell iPrint client on Ubuntu 13.10

    - by Mahdi Sadeghi
    Recently I have struggled a lot to make the Novell iPrint client work on my laptop. I need it to use the Follow Me printers at our university (you can collect your print from any printer). Using this tutorial from Novell, I tried to convert the RPM package and install it on Ubuntu 13.04 & 13.10. The post-install script of the generated deb package had a typo, which I saw in the post-install messages and fixed. Now I have the client running. To see the client UI I installed the Cinnamon desktop (because Unity does not have a system tray and the old whitelisting workarounds didn't work for the Novell client). I have the iPrint plugin installed in Firefox as well (I copied the shared object files to the plugin directories). I tried installing printers from the provided IPP URL (which lists the available printers on the server) with no success. After clicking the printer name I get various errors. Formerly Firefox used to ask for my network username/password when installing the SSL printer, but now it returns this: iPrint Printer - The printer is currently not available. However, I can install the non-SSL version, but the printer location is either empty or points to file:///dev/null, and even if I change it to the exact address I see on working machines it still prints nothing. I have tried the Novell command-line tool, iprntcmd, to print. It is installed at /opt/novell/iprint/bin/:
    msadeghi@werkstatt:/opt/novell/iprint/bin$ ./iprntcmd --addprinter ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me\ -\ IPP
    iprntcmd v05.04.00
    Adding printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP.
    Added printer ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me - IPP successfully.
    It adds the printer with an empty location and again no print. What I found interesting is the log file at ~/.iprint/errors.txt, with strange errors which I hope somebody here can understand. When I try to install the SSL printer I receive these logs (note that HP is my local printer and has nothing to do with iPrint):
    Thu Oct 31 11:02:03 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
    Thu Oct 31 11:02:03 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
    Thu Oct 31 11:02:05 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
    Thu Oct 31 11:02:05 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
    Thu Oct 31 11:02:06 2013 Trace Info: mydoreq.c, line 676 Group Info: CLIB Error Code: 0 (0x0) User ID: 1000 Error Msg: Success Debug Msg: MyCupsDoFileRequest - httpReconnect failed (0)
    Thu Oct 31 11:02:06 2013 Trace Info: mydoreq.c, line 1293 Group Info: CUPS-IPP Error Code: 1282 (0x502) User ID: 1000 Error Msg: iPrint Printer - The printer is currently not available. Debug Msg: MyCupsDoFileRequest - IPP SERVICE UNAVAILABLE
    Thu Oct 31 11:02:06 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
    Thu Oct 31 11:02:06 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
    Thu Oct 31 11:02:08 2013 Trace Info: iprint.c, line 6690 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for file:///dev/null - Unknown Port Type - file
    Thu Oct 31 11:02:08 2013 Trace Info: iprint.c, line 6800 Group Info: IPRINT-lib Error Code: 4096 (0x1000) User ID: 1000 Error Msg: iPrint Lib - Bad URI type supplied (not IPP:, HTTP:, or HTTPS:). Debug Msg: IPRINTInterpretURI for hp:/usb/HP_LaserJet_1018?serial=KP103A1 - No Port type specified
    I should say that my friend can print easily using the same instructions on CrunchBang, and another guy can on 12.04 LTS, though with more struggling. It worked for me on Linux Mint Maya with my old laptop as well. Is there anybody out there who can help me to solve these problems? I am really disappointed with Novell and our university support. PS: I had the same problem with 13.04. No matter if I am within the network or I connect with VPN, I have the same issues.
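    As a possible workaround while the iPrint client misbehaves, the queue can sometimes be added directly through CUPS from the command line. This is only a sketch: lpadmin is standard CUPS and the server URL is taken from the question, but whether this particular iPrint server accepts plain IPP connections without the Novell client is an assumption, and any spaces in the queue name have to be percent-encoded.

        sudo lpadmin -p FollowMe -E -v "ipp://iprint.rz.hs-offenburg.de/ipp/Follow-me%20-%20IPP"   # create and enable the queue
        lpstat -p FollowMe                                                                          # confirm the queue exists and is accepting jobs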


  • Adding A Custom Dropdown in RCDC for Forefront Identity Manager 2010

    - by Daniel Lackey
    My latest exploration has been FIM 2010 for Identity Management. The following is a post of how to add a custom dropdown for the FIM Portal. I have decided to document this as I cannot find documentation on how to do this anywhere else. I hope that it finds useful to others.   For starters, this was to me not an easy task to figure out. I really would like to know why it is so cumbersome to do something that seems like a lot of people would need to do, but that’s for another day J   The dropdown I wanted to add was for ‘Account Status’ which would display if the account is ‘Enabled’ or ‘Disabled’ in the data source Active Directory. This option would also allow helpdesk users or admins to administer the userAccountControl attribute in AD from the FIM Portal interface.   The first thing I had to do was create the attribute itself. This is done by going to Administration à Schema Management from the FIM 2010 portal. Once here, you click on All Attributes. What is listed here are all attributes and their associated Resource Types in FIM. To create the ‘AccountStatus’ attribute, click on New. As shown below, enter ‘AccountStatus’ with no spaces for the System Name and ‘Account Status’ for the Display Name. The Data Type is going to be ‘Indexed String’. Click Next.           Leave everything on the Localization tab default and click Next.   On the Validation tab as shown below, we will enter the regex expression ^(Enabled|Disabled)?$ with our two desired string values ‘Enabled’ and ‘Disabled’. Click on Finish and then and Submit to complete adding the attribute.       The next step involves associating the attribute with a resource type. This is called ‘Binding’ the attribute. From the Schema Management page, click on All Bindings. From the page that comes up, click on New. As shown below, enter ‘User’ for the Resource Type and ‘Account Status’ for the Attribute Type. This is essentially binding the Account Status attribute to the ‘User’ Resource Type. Click Next.    On the ‘Attribute Override’ tab, type in ‘Account Status’ for the Display Name field. Click Next.   On the ‘Localization’ tab, click Next.   On the ‘Validation’ tab, enter the regex expression ^(Enabled|Disabled)?$ we entered previously for the attribute. Click Finish and then Submit to complete.   Now that the Attribute and the Binding are complete, you have to give users permission to see the attribute on the User Edit page. Go to Administration à Management Policy Rules. Look for the rule named Administration: Administrators can read and update Users and click on it. Once it opens, click on the ‘Target Resources’ tab and look at the section named Resource Attributes. Type in at the end the ‘Account Status’ attribute and check it with the validator. Once done click on OK to save the changes.         Lastly, we need to add the actual dropdown control to the RCDC (Resource Control Display Configuration) for User Editing. Go to Administration à Resource Control Display Configuration. From here navigate until you find the RCDC named Configuration for User Editing RCDC and click on it. The following is what you will see:       First step is to export the Configuration Data file. Click on the Export configuration link and save the file to your desktop of other folder.   Find the file you just exported and open the file in your XML editor of choice. I use notepad but anything will work. Since we are adding a dropdown control, first find another control in the existing file that is already a dropdown in FIM. 
I used EmployeeType as my example. Copy the control from the beginning tag named <my:Control… to the ending tag </my:Control>. Now take what you copied and paste it in whatever location you desire within the form between two other controls. I chose to place the ‘Account Status’ field after the ‘Account Name’ field. After you paste the control you will need to modify so it looks like this:       Notice where you specify what attribute you are dealing with where it has AccountStatus in the XML. Once you are complete with modifying this, save the file and make sure it is a .xml file.   Now go back to the Configuration for User Editing screen and look at the section named ‘Configuration Data’. Click the ‘Browse’ button and find the XML file you just modified and choose it. Click OK on the bottom of the window and you are done!   Now when you click on a user’s name in the FIM Portal, you should see the newly added dropdown box as below:       Later I will post more about this drop down, specifically on how to automate actually ‘Disabling’ the account in the data source through the FIM Workflows and MAs.   <my:Control my:Name="AccountStatus" my:TypeName="UocDropDownList" my:Caption="{Binding Source=schema, Path=AccountStatus.DisplayName}" my:Description="{Binding Source=schema, Path=AccountStatus.Description}" my:RightsLevel="{Binding Source=rights, Path=AccountStatus}"> <my:Properties> <my:Property my:Name="ValuePath" my:Value="Value"/> <my:Property my:Name="CaptionPath" my:Value="Caption"/> <my:Property my:Name="HintPath" my:Value="Hint"/> <my:Property my:Name="ItemSource" my:Value="{Binding Source=schema, Path=AccountStatus.LocalizedAllowedValues}"/> <my:Property my:Name="SelectedValue" my:Value="{Binding Source=object, Path=AccountStatus, Mode=TwoWay}"/> </my:Properties> </my:Control>
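    Before uploading the modified Configuration Data file in the step above, it is worth confirming the edited XML is still well-formed, since a stray quote or unclosed tag is the most common reason an RCDC import fails. This is a generic check rather than part of the FIM documentation; xmllint (from libxml2) is one option and the file name is a placeholder, so any XML validator does the same job.

        xmllint --noout ConfigurationForUserEditing.xml    # prints nothing if the file is well-formed, otherwise reports the offending line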


  • XNA Notes 009

    - by George Clingerman
    This past week the MVPs (myself included) were on Microsoft campus for the MVP summit. So I apologize in advance if you did something cool or heard of something cool happening with XNA and XBLIGs and it’s not in my notes. I did my best to stay on top of things, but honestly this community is fast and furious with what it’s doing and creating. I really can’t keep up and that’s fantastic! But here’s what I *did* notice while I was there on Microsoft Campus (and I did make sure to point out to the XNA team several of these very cool happenings while I had their ears). Time Critical XNA News: The XNA team wants you to know that Dream Build Play registration is now open! http://blogs.msdn.com/b/xna/archive/2011/02/28/registration-now-open-for-dream-build-play-2011-challenge.aspx Join the XNA-UK create on March 24, 2011 at the Microsoft Tech Days Conference http://xna-uk.net/blogs/darkgenesis/archive/2011/02/27/join-the-xna-uk-crew-at-the-microsoft-tech-days-conference-on-24th-march-2011.aspx XNA Team: Shawn Hargreaves shares one of the coolest things that’s happened in the XNA community http://blogs.msdn.com/b/shawnhar/archive/2011/03/02/xbox-indies-pivot-view.aspx Nick Gravelyn continues his unique marketing/work prioritization strategy as he tries to get to 5,000 Pixel Man users before he makes Pixel Man 2 (and he’s almost there!) http://nickgravelyn.com/pixelman2/ XNA MVPs: A lot of the XNA MVPs were at the Microsoft MVP Summit 2011. Due to NDAs, most things can’t be shared, but I’m sure if you’re curious you could ask them about the general vibe and feeling they got from the team and the future of XNA/XBLIG and more. Catalin Zima and team release the free WP7 game Chickens Can Dream http://twitter.com/CatalinZima/statuses/41174062923390976 http://www.amusedsloth.com/2011/02/chickens-can-dream-is-live/ Charles Humphrey (NemoKrad) posts his March talk source and PowerPoint http://xna-uk.net/blogs/randomchaos/archive/2011/03/04/march-2011-talk-post-processing-framework.aspx XNA Developers: Michael B. McLaughlin posts about ANTS Memory Profile and creates a CheckMemoryAllocationGame sample (extremely useful if you’re looking to see how much memory some operation allocates!) http://geekswithblogs.net/mikebmcl/archive/2011/02/28/ants-memory-profiler-7.0-review.aspx http://geekswithblogs.net/mikebmcl/archive/2011/03/01/checkmemoryallocationgame-sample.aspx Andy Schatz (2009 IGF winner for Monaco) talking XNA at GDC 2011 http://www.gamasutra.com/view/news/33313/GDC_2011_Andy_Schatz_Ill_Make_My_Last_Game_When_I_Die.php Xbox LIVE Indie Games (XBLIG): Clover: A Curious Tale by BinaryTweed is coming as a Deal of the Week during St. 
Patricks Day http://majornelson.com/archive/2011/03/03/comingsoontothexboxlivemarketplacemarchthird.aspx Ska Studios away at GDC but still very post happy as always http://www.ska-studios.com/2011/03/02/swamped-picture-pack/ http://www.ska-studios.com/2011/02/28/the-february-showcase/ http://www.ska-studios.com/2011/02/25/good-morning-gato-51-smelling-the-roses/ Just Press Start interviews Matthew Mikuszewski of Darkwind Media about Blocks Indie http://justpressstart.net/?p=516 Gamergeddon Xbox Indie Game Round Up - February 27th http://www.gamergeddon.com/2011/02/27/xbox-indie-game-round-up-february-27th/ http://www.gamergeddon.com/category/xbox-360/indie-games/ GameMarx does a round up of all the Xbox Live Indie Game podcasts that are currently available http://www.gamemarx.com/news/2011/02/27/xbox-live-indie-game-podcasts.aspx GameMarx episode 11 http://www.gamemarx.com/video/the-show/26/ep-11-february-25-2011.aspx In perhaps what I feel is the most exciting news I’ve heard all week, Michael C. Neel (ViNull of GameMarx fame) re-launch XboxIndies.com! http://www.gamemarx.com/news/2011/03/01/the-relaunch-of-xboxindies-com.aspx http://xboxindies.com/ Armless Octopus shares a little of what they heard from Luke Schneider of Radiangames during his GDC 2011 talk http://www.armlessoctopus.com/2011/03/02/gdc-2011-luke-schneider-offers-insight-into-radiangames-success/ VVGindiecast Episode 1 with guests Derek Strickland(Mr_Deeke), Kris Steele(Kriswd40 from FunInfused Games) and Dave Voyles(From armlessoctopus.com) http://vvgtv.com/2011/02/25/vvgindiecast-xblig-podcast/ If you’re doing Xbox LIVE Indie Game Reviews get in touch with XboxIndies.com to get into their aggregated feed http://forums.create.msdn.com/forums/p/76931/467189.aspx#467189 B.U.T.T.O.N and Flotilla represented XNA very well at the Independent Games Festival (are there any more games entered that were created using XNA? Stand up and be heard!) http://www.igf.com/php-bin/entry2011.php?id=374 Armless Ocotopus interview at GDC 2011 with Soulcaster creator Ian Stocker http://www.armlessoctopus.com/2011/03/04/gdc-2011-interview-with-soulcaster-creator-ian-stocker/ MommysBestGames gets a nod in the DarkBasic newsletter where it features the Explosionade Editor (just do a search for Explosionade to get to the interesting bits!) http://www.thegamecreators.com/pages/newsletters/newsletter_issue_98.html You may be hearing the cries of FortressCraft (coming soon to XBLIG) being so wrong for stealing the idea from MineCraft. But did you know the the game MineCraft started from was an XNA game called Infiniminer? XNA is getting it’s fingers into EVERYTHING! http://www.minecraftwiki.net/wiki/Infiniminer XNA Development: TorqueX is NOT dead thanks to the tremendous efforts of the XNA Community working on the CEV (special thanks to @PinoEire for all his hard work on making that happen!) http://www.garagegames.com/community/blogs/view/20878 http://torquecev.com/ Dave Henry has posted XNA 3.x adding platformer start kit to the network game state management on his new site http://twitter.com/#!/mort8088/status/43407715908853760 http://mort8088.com/2011/03/03/xna-3-x-adding-platformer-starter-kit-to-network-game-state-management/ Mark Bamford releases XNAViewer 4.0, great for running XNA games inside of a Windows Form (for building level editors, etc.) 
http://twitter.com/#!/xzodia04/status/43466830412660736 http://xnaviewer.codeplex.com/ Unit testing an XNA game with Resharper and NUnit http://smnbss.wordpress.com/2011/02/28/planetx-unit-testing-an-xna-game-with-resharper-and-nunit-wp7-xbox-xna/ XNA for Silverlight developers: Part 5 - Input (touch + gestures) http://ht.ly/1bxwUE Mike McLaughlin shares a link he stumbled across for those looking to understand vector and matrix math http://twitter.com/#!/mikebmcl/status/42587074725036032 http://chortle.ccsu.edu/VectorLessons/vectorIndex.html DigitalRune Resources Pooling in XNA (Part 1) http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/84/DigitalRune-Helper-Library-Resource-Pooling-in-XNA-Part-1.aspx JohnK “bobthecbuilder” released a new SunBurn Update that lowers the requirements for Windows Games http://twitter.com/#!/bobthecbuilder/status/43457306578522112 http://www.synapsegaming.com/blogs/johnk/archive/2011/03/03/sunburn-update-windows-redistributable.aspx Quick update on the Indiefreaks Game Framework v0.4 development status http://indiefreaks.com/2011/03/04/quick-update-on-igf-v0-4-development/


  • How to Use Windows’ Advanced Search Features: Everything You Need to Know

    - by Chris Hoffman
    You should never have to hunt down a lost file on modern versions of Windows — just perform a quick search. You don’t even have to wait for a cartoon dog to find your files, like on Windows XP. The Windows search indexer is constantly running in the background to make quick local searches possible. This enables the kind of powerful search features you’d use on Google or Bing — but for your local files. Controlling the Indexer By default, the Windows search indexer watches everything under your user folder — that’s C:\Users\NAME. It reads all these files, creating an index of their names, contents, and other metadata. Whenever they change, it notices and updates its index. The index allows you to quickly find a file based on the data in the index. For example, if you want to find files that contain the word “beluga,” you can perform a search for “beluga” and you’ll get a very quick response as Windows looks up the word in its search index. If Windows didn’t use an index, you’d have to sit and wait as Windows opened every file on your hard drive, looked to see if the file contained the word “beluga,” and moved on. Most people shouldn’t have to modify this indexing behavior. However, if you store your important files in other folders — maybe you store your important data a separate partition or drive, such as at D:\Data — you may want to add these folders to your index. You can also choose which types of files you want to index, force Windows to rebuild the index entirely, pause the indexing process so it won’t use any system resources, or move the index to another location to save space on your system drive. To open the Indexing Options window, tap the Windows key on your keyboard, type “index”, and click the Indexing Options shortcut that appears. Use the Modify button to control the folders that Windows indexes or the Advanced button to control other options. To prevent Windows from indexing entirely, click the Modify button and uncheck all the included locations. You could also disable the search indexer entirely from the Programs and Features window. Searching for Files You can search for files right from your Start menu on Windows 7 or Start screen on Windows 8. Just tap the Windows key and perform a search. If you wanted to find files related to Windows, you could perform a search for “Windows.” Windows would show you files that are named Windows or contain the word Windows. From here, you can just click a file to open it. On Windows 7, files are mixed with other types of search results. On Windows 8 or 8.1, you can choose to search only for files. If you want to perform a search without leaving the desktop in Windows 8.1, press Windows Key + S to open a search sidebar. You can also initiate searches directly from Windows Explorer — that’s File Explorer on Windows 8. Just use the search box at the top-right of the window. Windows will search the location you’ve browsed to. For example, if you’re looking for a file related to Windows and know it’s somewhere in your Documents library, open the Documents library and search for Windows. Using Advanced Search Operators On Windows 7, you’ll notice that you can add “search filters” form the search box, allowing you to search by size, date modified, file type, authors, and other metadata. On Windows 8, these options are available from the Search Tools tab on the ribbon. These filters allow you to narrow your search results. 
If you’re a geek, you can use Windows’ Advanced Query Syntax to perform advanced searches from anywhere, including the Start menu or Start screen. Want to search for “windows,” but only bring up documents that don’t mention Microsoft? Search for “windows -microsoft”. Want to search for all pictures of penguins on your computer, whether they’re PNGs, JPEGs, or any other type of picture file? Search for “penguin kind:picture”. We’ve looked at Windows’ advanced search operators before, so check out our in-depth guide for more information. The Advanced Query Syntax gives you access to options that aren’t available in the graphical interface. Creating Saved Searches Windows allows you to take searches you’ve made and save them as a file. You can then quickly perform the search later by double-clicking the file. The file functions almost like a virtual folder that contains the files you specify. For example, let’s say you wanted to create a saved search that shows you all the new files created in your indexed folders within the last week. You could perform a search for “datecreated:this week”, then click the Save search button on the toolbar or ribbon. You’d have a new virtual folder you could quickly check to see your recent files. One of the best things about Windows search is that it’s available entirely from the keyboard. Just press the Windows key, start typing the name of the file or program you want to open, and press Enter to quickly open it. Windows 8 made this much more obnoxious with its non-unified search, but unified search is finally returning with Windows 8.1.     
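    For reference, here are a few more queries in the same Advanced Query Syntax style discussed above. The property names (kind:, ext:, size:, author:, datemodified:, datecreated:) are standard AQS, but the values are made-up examples, so treat them as patterns to adapt rather than guaranteed results.

        report kind:document datemodified:last week
        penguin ext:.png size:>5MB
        budget author:"Chris" datecreated:2013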


  • Add Zune Desktop Player to Windows 7 Media Center

    - by DigitalGeekery
    Are you a Zune owner who prefers the Zune player for media playback? Today we’ll show you how to integrate the Zune player with WMC using Media Center Studio. You’ll need to download Media Center Studio and the Zune Desktop player software. (See download links below) Also, make sure you have Media Center closed. Some of the actions in Media Center Studio cannot be performed while WMC is open. Open Media Center Studio and click on the Start Menu tab at the top of the application.   Click the Application button. Here we will create an Entry Point for the Zune player so that we can add it to Media Center. Type in a name for your entry point in the title text box. This is the name that will appear under the tile when added to the Media Center start menu. Next, type in the path to the Zune player. By default this should be C:\Program Files\Zune\Zune.exe. Note: Be sure to use the original path, not a link to the desktop icon.   The Active image is the image that will appear on the tile in Media Center. If you wish to change the default image, click the Browse button and select a different image. Select Stop the currently playing media from the When launched do the following: dropdown list.  Otherwise, if you open Zune player from WMC while playing another form of media, that media will continue to play in the background.   Now we will choose a keystroke to use to exit the Zune player software and return to Media Center. Click on the the green plus (+) button. When prompted, press a key to use to the close the Zune player. Note: This may also work with your Media Center remote. You may want to set a keyboard keystroke as well as a button on your remote to close the program. You may not be able to set certain remote buttons to close the application. We found that the back arrow button worked well. You can also choose a keystroke to kill the program if desired. Be sure to save your work before exiting by clicking the Save button on the Home tab.   Next, select the Start Menu tab and click on the next to Entry points to reveal the available entry points. Find the Zune player tile in the Entry points area. We want to drag the tile out onto one of the menu strips on the start menu. We will drag ours onto the Extras Library strip. When you begin to drag the tile, green plus (+) signs will appear in between the tiles. When you’ve dragged the tile over any of the green plus signs, the  red “Move” label will turn to a blue “Move to” label. Now you can drop the tile into position. Save your changes and then close Media Center Studio. When you open Media Center, you should see your Zune tile on the start menu. When you select the Zune tile in WMC, Media Center will be minimized and Zune player will be launched. Now you can enjoy your media through the Zune player. When you close Zune player with the previously assigned keystroke or by clicking the “X” at the top right, Windows Media Center will be re-opened. Conclusion We found the Zune player worked with two different Media Center remotes that we tested. It was a times a little tricky at times to tell where you were when navigating through the Zune software with a remote, but it did work. In addition to managing your music, the Zune player is a nice way to add podcasts to your Media Center setup. We should also mention that you don’t need to actually own a Zune to install and use the Zune player software. Media Center Studio works on both Vista and Windows 7. 
We covered Media Center Studio a bit more in depth in a previous post on customizing the Windows Media Center start menu. Are you new to Zune player? Familiarize yourself a bit more by checking out some of our earlier posts like how to update your Zune player, and experiencing your music a whole new way with Zune for PC.
Downloads: Zune Desktop Player download, Media Center Studio download


  • Computer Networks UNISA - Chap 12 &ndash; Networking Security

    - by MarkPearl
    After reading this section you should be able to:
    Identify security risks in LANs and WANs and design security policies that minimize risks
    Explain how physical security contributes to network security
    Discuss hardware- and design-based security techniques
    Understand methods of encryption, such as SSL and IPSec, that can secure data in storage and in transit
    Describe how popular authentication protocols such as RADIUS, TACACS, Kerberos, PAP, CHAP, and MS-CHAP function
    Use network operating system techniques to provide basic security
    Understand wireless security protocols such as WEP, WPA and 802.11i
    Security Audits
    Before spending time and money on network security, examine your network's security risks – rate and prioritize risks. Different organizations have different levels of network security requirements.
    Security Risks
    Not all security breaches result from a manipulation of network technology – there are human factors that can play a role as well. The following categories are areas of consideration:
    Risks associated with People
    Risks associated with Transmission and Hardware
    Risks associated with Protocols and Software
    Risks associated with Internet Access
    An effective security policy
    A security policy identifies your security goals, risks, levels of authority, designated security coordinator and team members, responsibilities for each team member, and responsibilities for each employee. In addition it specifies how to address security breaches. It should not state exactly which hardware, software, architecture, or protocols will be used to ensure security, nor how hardware or software will be installed and configured. A security policy must address an organization's specific risks. To understand your risks, you should conduct a security audit that identifies vulnerabilities and rates both the severity of each threat and its likelihood of occurring.
    Security Policy Content
    Security policy content should:
    Include policies for each category of security
    Explain to users what they can and cannot do and how these measures protect the network's security
    Define what confidential means to the organization
    Response Policy
    A security policy should provide for a planned response in the event of a security breach. The response policy should identify the members of a response team, all of whom should clearly understand the security policy, risks, and measures in place. Some of the roles concerned could include:
    Dispatcher – the person on call who first notices the breach
    Manager – the person who coordinates the resources necessary to solve the problem
    Technical Support Specialist – the person who focuses on solving the problem
    Public relations specialist – the person who acts as the official spokesperson for the organization
    Physical Security
    An important element in network security is restricting physical access to its components. There are various techniques for this, including locking doors, security people at access points, etc. You should identify the following:
    Which rooms contain critical systems or data and must be secured
    Through what means might intruders gain access to these rooms
    How and to what extent are authorized personnel granted access to these rooms
    Are authentication methods such as ID cards easy to forge, etc.
    Security in Network Design
    The optimal way to prevent external security breaches from affecting your LAN is not to connect your LAN to the outside world at all. The next best protection is to restrict access at every point where your LAN connects to the rest of the world.
Router Access List – can be used to filter or decline access to a portion of a network for certain devices. Intrusion Detection and Prevention While denying someone access to a section of the network is good, it is better to be able to detect when an attempt has been made and notify security personnel. This can be done using IDS (intrusion detection system) software. One drawback of IDS software is it can detect false positives – i.e. an authorized person who has forgotten his password attempts to logon. Firewalls A firewall is a specialized device, or a computer installed with specialized software, that selectively filters or blocks traffic between networks. A firewall typically involves a combination of hardware and software and may reside between two interconnected private networks. The simplest form of a firewall is a packet filtering firewall, which is a router that examines the header of every packet of data it receives to determine whether that type of packet is authorized to continue to its destination or not. Firewalls can block traffic in and out of a LAN. NOS (Network Operating System) Security Regardless of the operating system, generally every network administrator can implement basic security by restricting what users are authorized to do on a network. Some of the restrictions include things related to Logons – place, time of day, total time logged in, etc Passwords – length, characters used, etc Encryption Encryption is the use of an algorithm to scramble data into a format that can be read only by reversing the algorithm. The purpose of encryption is to keep information private. Many forms of encryption exist and new ways of cracking encryption are continually being invented. The following are some categories of encryption… Key Encryption PGP (Pretty Good Privacy) SSL (Secure Sockets Layer) SSH (Secure Shell) SCP (Secure CoPy) SFTP (Secure File Transfer Protocol) IPSec (Internet Protocol Security) For a detailed explanation on each section refer to pages 596 to 604 of textbook Authentication Protocols Authentication protocols are the rules that computers follow to accomplish authentication. Several types exist and the following are some of the common authentication protocols… RADIUS and TACACS PAP (Password Authentication Protocol) CHAP and MS-CHAP EAP (Extensible Authentication Protocol) 802.1x (EAPoL) Kerberos Wireless Network Security Wireless transmissions are particularly susceptible to eavesdropping. The following are two wireless network security protocols WEP WPA
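    As a concrete illustration of the packet-filtering firewall described earlier in this section, the rule set below shows the idea in iptables syntax on a Linux router. It is not from the textbook: the 192.168.1.0/24 network is a placeholder for the internal LAN, and a real policy would allow more services than web traffic.

        iptables -P FORWARD DROP                                              # default: drop anything not explicitly allowed
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT    # replies to traffic the LAN initiated
        iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 80 -j ACCEPT     # outbound HTTP from the LAN
        iptables -A FORWARD -s 192.168.1.0/24 -p tcp --dport 443 -j ACCEPT    # outbound HTTPS from the LAN

    Each rule looks only at packet headers (addresses, protocol, port, connection state), which is exactly what distinguishes a packet-filtering firewall from proxy or application-layer firewalls.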


  • 8 Mac System Features You Can Access in Recovery Mode

    - by Chris Hoffman
    A Mac's Recovery Mode is for more than just reinstalling Mac OS X. You'll find many other useful troubleshooting utilities here — you can use these even if your Mac can't boot normally. To access Recovery Mode, restart your Mac and press and hold the Command + R keys during the boot-up process. This is one of several hidden startup options on a Mac.
    Reinstall Mac OS X
    Most people know Recovery Mode as the place you go to reinstall OS X on your Mac. Recovery Mode will download the OS X installer files from the Internet if you don't have them locally, so they don't take up space on your disk and you'll never have to hunt for an operating system disc. Better yet, it will download up-to-date installation files so you don't have to spend hours installing operating system updates later. Microsoft could learn a lot from Apple here.
    Restore From a Time Machine Backup
    Instead of reinstalling OS X, you can choose to restore your Mac from a Time Machine backup. This is like restoring a system image on another operating system. You'll need an external disk containing a backup image created on the current computer to do this.
    Browse the Web
    The Get Help Online link opens the Safari web browser to Apple's documentation site. It's not limited to Apple's website, though — you can navigate to any website you like. This feature allows you to access and use a browser on your Mac even if it isn't booting properly. It's ideal for looking up troubleshooting information.
    Manage Your Disks
    The Disk Utility option opens the same Disk Utility you can access from within Mac OS X. It allows you to partition disks, format them, scan disks for problems, wipe drives, and set up drives in a RAID configuration. If you need to edit partitions from outside your operating system, you can just boot into the recovery environment — you don't have to download a special partitioning tool and boot into it.
    Choose the Default Startup Disk
    Click the Apple menu on the bar at the top of your screen and select Startup Disk to access the Choose Startup Disk tool. Use this tool to choose your computer's default startup disk and reboot into another operating system. For example, it's useful if you have Windows installed alongside Mac OS X with Boot Camp.
    Add or Remove an EFI Firmware Password
    You can also add a firmware password to your Mac. This works like a BIOS password or UEFI password on a Windows or Linux PC. Click the Utilities menu on the bar at the top of your screen and select Firmware Password Utility to open this tool. Use the tool to turn on a firmware password, which will prevent your computer from starting up from a different hard disk, CD, DVD, or USB drive without the password you provide. This prevents people from booting up your Mac with an unauthorized operating system. If you've already enabled a firmware password, you can remove it from here.
    Use Network Tools to Troubleshoot Your Connection
    Select Utilities > Network Utility to open a network diagnostic tool. This utility provides a graphical way to view your network connection information. You can also use the netstat, ping, lookup, traceroute, whois, finger, and port scan utilities from here. These can be helpful to troubleshoot Internet connection problems. For example, the ping command can demonstrate whether you can communicate with a remote host and show you if you're experiencing packet loss, while the traceroute command can show you where a connection is failing if you can't connect to a remote server.
Open a Terminal If you’d like to get your hands dirty, you can select Utilities > Terminal to open a terminal from here. This terminal allows you to do more advanced troubleshooting. Mac OS X uses the bash shell, just as typical Linux distributions do. Most people will just need to use the Reinstall Mac OS X option here, but there are many other tools you can benefit from. If the Recovery Mode files on your Mac are damaged or unavailable, your Mac will automatically download them from Apple so you can use the full recovery environment.
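    As a sketch of the kind of commands available once you open that Recovery Mode Terminal, the lines below use standard OS X utilities; www.apple.com is just a placeholder host, and disk names will differ per machine.

        ping -c 4 www.apple.com     # check basic connectivity and packet loss
        traceroute www.apple.com    # see where along the path a connection fails
        diskutil list               # list disks and partitions before repartitioning or erasing anything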


  • Java ME SDK 3.2 is now live

    - by SungmoonCho
    Hi everyone, It has been a while since we released the last version. We have been very busy integrating new features and making lots of usability improvements into this new version. Datasheet is available here. Please visit Java ME SDK 3.2 download page to get the latest and best version yet! Some of the new features in this version are described below. Embedded Application SupportOracle Java ME SDK 3.2 now supports the new Oracle® Java ME Embedded. This includes support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). You can test and debug applications either on the built-in device emulators or on your device. Memory MonitorThe Memory Monitor shows memory use as an application runs. It displays a dynamic detailed listing of the memory usage per object in table form, and a graphical representation of the memory use over time. Eclipse IDE supportOracle Java ME SDK 3.2 now officially supports Eclipse IDE. Once you install the Java ME SDK plugins on Eclipse, you can start developing, debugging, and profiling your mobile or embedded application. Skin CreatorWith the Custom Device Skin Creator, you can create your own skins. The appearance of the custom skins is generic, but the functionality can be tailored to your own specifications.  Here are the release highlights. Implementation and support for the new Oracle® Java Wireless Client 3.2 runtime and the Oracle® Java ME Embedded runtime. The AMS in the CLDC emulators has a new look and new functionality (Install Application, Manage Certificate Authorities and Output Console). Support for JSR 228, the Information Module Profile-Next Generation API (IMP-NG). The IMP-NG platform is implemented as a subset of CLDC. Support includes: A new emulator for headless devices. Javadocs for the following Oracle APIs: Device Access API, Logging API, AMS API, and AccessPoint API. New demos for IMP-NG features can be run on the emulator or on a real device running the Oracle® Java ME Embedded runtime. New Custom Device Skin Creator. This tool provides a way to create and manage custom emulator skins. The skin appearance is generic, but the functionality, such as the JSRs supported or the device properties, are up to you. This utility only supported in NetBeans. Eclipse plugin for CLDC/MIDP. For the first time Oracle Java ME SDK is available as an Eclipse plugin. The Eclipse version does not support CDC, the Memory Monitor, and the Custom Device Skin Creator in this release. All Java ME tools are implemented as NetBeans plugins. As of the plugin integrates Java ME utilities into the standard NetBeans menus. Tools > Java ME menu is the place to launch Java ME utilities, including the new Skin Creator. Profile > Java ME is the place to work with the Network Monitor and the Memory Monitor. Use the standard NetBeans tools for debugging. Profiling, Network monitoring, and Memory monitoring are integrated with the NetBeans profiling tools. New network monitoring protocols are supported in this release: WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets. Java ME SDK Update Center. Oracle Java ME SDK can be updated or extended by new components. The Update Center can download, install, and uninstall plugins specific to the Java ME SDK. A plugin consists of runtime components and skins. Bug fixes and enhancements. This version comes with a few known problems. All of them have workarounds, so I hope you don't get stuck in these issues when you are using the product. 
    You cannot watch static variables during an Eclipse debugging session, and sometimes the Variable view cannot show data. In the source code, move the mouse over the required variable to inspect the variable value.
    A real device shown in the Device Selector is deleted from the Device Manager, yet it still appears. Kill the device manager in the system tray and relaunch it. Then you will see the device removed from the list.
    On-device profiling does not work on a device. CPU profiling, network monitoring, and memory monitoring do not work on the device, since the device runtime does not yet support it. Please do the profiling with your emulator first, and then test your application on the device.
    In the Device Selector, using Clean Database on a real external device causes a null pointer exception. External devices do not have a database recognized by the SDK, so you can disregard this exception message.
    Suspending the emulator during a Memory Monitor session hangs the emulator. Do not use the Suspend option (F5) while the Memory Monitor is running. If the emulator is hung, open the Windows Task Manager and stop the emulator process (javaw). To switch to another application while the Memory Monitor is running, choose Application > AMS Home (F4), and select a different application.
    Please let us know how we can make it even better by sending us your feedback. -Java ME SDK Team
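    For the hung-emulator workaround in the list above, an equivalent to opening Task Manager is the taskkill command; this assumes the emulator really is running as javaw.exe, as the release notes state.

        rem forcibly end every javaw.exe process, including the stuck emulator
        taskkill /F /IM javaw.exe

    Note that this ends every javaw.exe process, so close other Java applications before running it.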

    Read the article

  • 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’

    - by Gopinath
    Humanity’s quest to explore the surrounding planets and see whether we can live there is taking a new shape today. NASA’s Mars-probing robot, Curiosity, blasted off today on its 9-month journey to reach Mars and explore it for the possibility of life there. Scientists say that Curiosity is the most advanced rover ever launched to probe for life on other planets. Here is the launch video and some analysis by a news reporter. Let’s look at the 6 interesting facts about the mission. 1. It’s as big as a car Curiosity is the biggest rover ever launched by NASA to probe for life on other planets. It’s as big as a car and almost double the size of its predecessor rover Spirit. The length of Curiosity is around 9 feet 10 inches (3 meters), the width is 9 feet 1 inch (2.8 meters) and the height is 7 feet (2.1 meters). 2. Powered by Plutonium – Lasts 24×7 for 23 months NASA’s earlier missions to explore Mars were powered by solar power, and that hindered the rovers’ ability to move around when the Sun was hiding. Due to this dependency on the Sun, the earlier rovers were not able to traverse places where there was no sunlight. Curiosity, on the other hand, is equipped with a radioisotope power system that generates electricity from the heat emitted by plutonium’s radioactive decay. The plutonium weighs around 10 pounds and can generate the power required for operating the rover for close to 23 months. The best part of the new power system is that Curiosity can roam around in darkness or light, all year round. 3. Rocket-powered backpack for a science fiction style landing Curiosity is so heavy that NASA could not use a parachute and balloons to air-drop the rover onto the surface of Mars as in its previous missions. They are trying out a new science fiction style air-dropping mechanism that is similar to a sky crane heavy-lift helicopter. The landing of the rover begins with entry into the Mars atmosphere protected by a heat shield. At about 6 miles to the surface, the heat shield is jettisoned and a parachute is deployed to glide the rover smoothly. When the rover reaches 3 miles above the surface, the parachute is jettisoned and the eight-motor rocket backpack is used for a smooth and impact-free landing as shown in the image. Here is an animation created by NASA on the landing sequence. If you are interested in more detailed information about the landing process, check the landing sequence picture available on the NASA website. 4. Equipped with a Star Wars style laser gun Hollywood movie directors and novelists have always imagined aliens coming to Earth with spaceships full of laser guns, blasting the objects that come in their way. With Curiosity the equation is going to change. It has a powerful laser gun equipped on one of its arms to beam a laser at rocks and vaporize them. This is not part of any assault mission Curiosity is expected to carry out; the laser gun will be used to carry out experiments to detect life and understand nature. 5. Most sophisticated laboratory, powered by 10 instruments Around 10 state-of-the-art instruments are part of the Curiosity rover, and these 10 instruments form the most advanced rover-based lab ever built by NASA. There are instruments to cut through rocks to examine them, and other instruments will search for organic compounds. Mounted cameras can study targets from a distance; arm-mounted instruments can study the targets they touch. A microscopic lens attached to the arm can see and magnify objects as tiny as 12.5 micrometers. 6. 
    Rover Carrying 1.24 million names etched on silicon In early June 2009 NASA launched a campaign called “Send Your Name to Mars” and around 1.24 million people registered their names through NASA’s website. All those 1.24 million names are etched on silicon chips mounted onto Curiosity’s deck. If you registered your name in the campaign, maybe your name is going to reach Mars soon. Curiosity on the Web If you wish to follow the mission, here are a few links to help you: NASA’s Curiosity Web Page, Follow Curiosity on Facebook, Follow @MarsCuriosity on Twitter, Artistic Gallery Image of Mars Rover Curiosity, A printable sheet of Curiosity Mission [pdf]. Images credit: NASA. This article, titled 6 Interesting Facts About NASA’s Mars Rover ‘Curiosity’, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient the file remains locked after the message has been sent. Oddly this particular issue hasn’t cropped up before for me although these routines are in use in a number of applications I’ve built. The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using Sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient which holds a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma delimited list for files in an Attachments string property which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The jist of the code is something like this: /// <summary> /// Fully self contained mail sending method. Sends an email message by connecting /// and disconnecting from the email server. /// </summary> /// <returns>true or false</returns> public bool SendMail() { if (!this.Connect()) return false; try { // Create and configure the message MailMessage msg = this.GetMessage(); smtp.Send(msg); this.OnSendComplete(this); } catch (Exception ex) { string msg = ex.Message; if (ex.InnerException != null) msg = ex.InnerException.Message; this.SetError(msg); this.OnSendError(this); return false; } finally { // close connection and clear out headers // SmtpClient instance nulled out this.Close(); } return true; } /// <summary> /// Configures the message interface /// </summary> /// <param name="msg"></param> protected virtual MailMessage GetMessage() { MailMessage msg = new MailMessage(); msg.Body = this.Message; msg.Subject = this.Subject; msg.From = new MailAddress(this.SenderEmail, this.SenderName); if (!string.IsNullOrEmpty(this.ReplyTo)) msg.ReplyTo = new MailAddress(this.ReplyTo); // Send all the different recipients this.AssignMailAddresses(msg.To, this.Recipient); this.AssignMailAddresses(msg.CC, this.CC); this.AssignMailAddresses(msg.Bcc, this.BCC); if (!string.IsNullOrEmpty(this.Attachments)) { string[] files = this.Attachments.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries); foreach (string file in files) { msg.Attachments.Add(new Attachment(file)); } } if (this.ContentType.StartsWith("text/html")) msg.IsBodyHtml = true; else msg.IsBodyHtml = false; msg.BodyEncoding = this.Encoding; … additional code omitted return msg; } Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and in GetMessage() to an individual MailMessage properties. Specifically notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it’s written however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly but the local files are not immediately closed. 
    As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem: // Create and configure the message using (MailMessage msg = this.GetMessage()) { smtp.Send(msg); if (this.SendComplete != null) this.OnSendComplete(this); // or use an explicit msg.Dispose() here } The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic. Odd to me too is that SmtpClient() doesn’t implement IDisposable – it’s only the MailMessage (and Attachments) that implement it and require it to clean up leftover resources like open file handles. This means that you couldn’t even use a using() statement around the SmtpClient code to resolve this – instead you’d have to wrap it around the message object, which again is rather unexpected. Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid this same half an hour of hunting and searching. Resources: Full code to SmptClientNative (West Wind Web Toolkit Repository) | SmtpClient Documentation MSDN. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET
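    For reference, here is a minimal, self-contained sketch of the fix described above. It is not the West Wind wrapper itself – the SMTP host, addresses and file path are placeholder assumptions – but it shows the MailMessage being disposed so that the attachment's file handle is released, after which the file can be deleted without a locking error:

        using System;
        using System.IO;
        using System.Net.Mail;

        class AttachmentLockDemo
        {
            static void Main()
            {
                string file = @"C:\temp\report.pdf";          // placeholder attachment path
                var smtp = new SmtpClient("localhost");       // placeholder SMTP host

                // Dispose the MailMessage (and with it the Attachment streams)
                // so the file handle is released once the message has been sent.
                using (var msg = new MailMessage("from@example.com", "to@example.com",
                                                 "Report", "See attached."))
                {
                    msg.Attachments.Add(new Attachment(file));
                    smtp.Send(msg);
                }   // msg.Dispose() runs here and closes the attachment's file stream

                // Without the using block this delete would fail because the file is still locked.
                File.Delete(file);
                Console.WriteLine("Attachment file deleted - no lock left behind.");
            }
        }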

    Read the article

  • It’s nice to be important, but it’s more important to be nice

    - by BuckWoody
    I’ve been a little “preachy” lately, telling you that you should let people finish their sentences, and always check a problem out before you tell a user that their issue is “impossible”. Well, I’ll round that out with one more tip today. Keep in mind that all of these things are actions I’ve been guilty of, hopefully in the past. I’m kind of a “work in progress”. And yes, I know these tips are coming from someone who picks on people in presentations, but that is of course done in fun, and (hopefully) with the audience’s knowledge.   (No, this isn’t aimed at any one person or event in particular – I just see it happen a lot)   I’ve seen, unfortunately over and over, someone in authority react badly to someone who is incorrect, or at least perceived to be incorrect. This might manifest itself in a comment, post, question or whatever, but the point is that I’ve seen really intelligent people literally attack someone they view as getting something wrong. Don’t misunderstand me; if someone posts that you should always drop a production database in the middle of the day I think you should certainly speak up and mention that this might be a bad idea!  No, I’m talking about generalizations or even incorrect statements done in good faith. Let me explain with an example.   Suppose someone makes the statement: “If you don’t have enough space on your system, you can just use a DBCC command to shrink the database”. Let’s take two responses to this statement.   Response One: “That’s insane. Everyone knows that shrinking a database is a stupid idea, you’re just going to fragment your indexes all over the place.” Response Two: “That’s an interesting take – in my experience and from what I’ve read here (someurl.com) I think this might not be a universal best practice.”   Of course, both responses let the person making the statement and those reading it know that you don’t agree, and that it’s probably wrong. But the person you responded to and the general audience hearing you (or reading your response) might form two different opinions of you.   The first response says to me “this person really needs to be right, and takes arguments personally. They aren’t thinking of the other person at all, or the folks reading or hearing the exchange. They turned an incorrect technical statement into a personal attack. They haven’t left the other party any room to ‘save face’, and they have potentially turned what could be a positive learning experience for everyone into a negative. Also, they sound more than just a little arrogant.”   The second response says to me “this person has left room for everyone to save face, has presented evidence to the contrary and is thinking about moving the ball forward and getting it right rather than attacking someone for getting it wrong.” It’s the idea of questioning a statement rather than attacking a person.   Perhaps you have a different take. Maybe you think the “direct” approach is best – and maybe that’s worked for you. Something to consider is what you’ve really accomplished while using that first method. Sure, the info you provide is correct, and perhaps someone out there won’t shrink a database because of your response – but perhaps you’ve turned a lot more people off, and now they won’t listen to your other valuable information. You’ll be an expert, but another one of the nameless, arrogant jerks in technology. And I don’t think anyone likes to be thought of that way.   OK, I’ll get down off of the high-horse now. 
    And I’ll keep the title of this entry (said to me by my grandmother when I was a little kid) in mind when I dismount.

    Read the article

  • How to Use An Antivirus Boot Disc or USB Drive to Ensure Your Computer is Clean

    - by Chris Hoffman
    If your computer is infected with malware, running an antivirus within Windows may not be enough to remove it. If your computer has a rootkit, the malware may be able to hide itself from your antivirus software. This is where bootable antivirus solutions come in. They can clean malware from outside the infected Windows system, so the malware won’t be running and interfering with the clean-up process. The Problem With Cleaning Up Malware From Within Windows Standard antivirus software runs within Windows. If your computer is infected with malware, the antivirus software will have to do battle with the malware. Antivirus software will try to stop the malware and remove it, while the malware will attempt to defend itself and shut down the antivirus. For really nasty malware, your antivirus software may not be able to fully remove it from within Windows. Rootkits, a type of malware that hides itself, can be even trickier. A rootkit could load at boot time before other Windows components and prevent Windows from seeing it, hide its processes from the task manager, and even trick antivirus applications into believing that the rootkit isn’t running. The problem here is that the malware and antivirus are both running on the computer at the same time. The antivirus is attempting to fight the malware on its home turf — the malware can put up a fight. Why You Should Use an Antivirus Boot Disc Antivirus boot discs deal with this by approaching the malware from outside Windows. You boot your computer from a CD or USB drive containing the antivirus and it loads a specialized operating system from the disc. Even if your Windows installation is completely infected with malware, the special operating system won’t have any malware running within it. This means the antivirus program can work on the Windows installation from outside it. The malware won’t be running while the antivirus tries to remove it, so the antivirus can methodically locate and remove the harmful software without it interfering. Any rootkits won’t be able to set up the tricks they use at Windows boot time to hide themselves from the rest of the operating system. The antivirus will be able to see the rootkits and remove them. These tools are often referred to as “rescue disks.” They’re meant to be used when you need to rescue a hopelessly infected system. Bootable Antivirus Options As with any type of antivirus software, you have quite a few options. Many antivirus companies offer bootable antivirus systems based on their antivirus software. These tools are generally free, even when they’re offered by companies that specialize in paid antivirus solutions. Here are a few good options: avast! Rescue Disk – We like avast! for offering a capable free antivirus with good detection rates in independent tests. avast! now offers the ability to create an antivirus boot disc or USB drive. Just navigate to the Tools -> Rescue Disk option in the avast! desktop application to create bootable media. BitDefender Rescue CD – BitDefender always seems to receive good scores in independent tests, and the BitDefender Rescue CD offers the same antivirus engine in the form of a bootable disc. Kaspersky Rescue Disk – Kaspersky also receives good scores in independent tests and offers its own antivirus boot disc. These are just a handful of options. If you prefer another antivirus for some reason — Comodo, Norton, Avira, ESET, or almost any other antivirus product — you’ll probably find that it offers its own system rescue disk. 
How to Use an Antivirus Boot Disc Using an antivirus boot disc or USB drive is actually pretty simple. You’ll just need to find the antivirus boot disc you want to use and burn it to disc or install it on a USB drive. You can do this part on any computer, so you can create antivirus boot media on a clean computer and then take it to an infected computer. Insert the boot media into the infected computer and then reboot. The computer should boot from the removable media and load the secure antivirus environment. (If it doesn’t, you may need to change the boot order in your BIOS or UEFI firmware.) You can then follow the instructions on your screen to scan your Windows system for malware and remove it. No malware will be running in the background while you do this. Antivirus boot discs are useful because they allow you to detect and clean malware infections from outside an infected operating system. If the operating system is severely infected, it may not be possible to remove — or even detect — all the malware from within it. Image Credit: aussiegall on Flickr     

    Read the article

  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 profiler traces stored as .trc files into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file. Example Usages Cache warming for SQL Server Analysis Services Reading the flight recorder Find out the longest running queries on a server Analyze statements for CPU, memory by user or some other criteria you choose Properties The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. AccessMode (Enumeration): This property determines how the Filename property is interpreted. The values available are DirectInput and Variable. Filename (String): This property holds the path for the trace file to load (*.trc). The value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property. Trace Column Definition Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately, the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO-generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml - SQL Server Database Engine Trace Columns TraceDataColumnsAS.xml - SQL Server Analysis Services Trace Columns The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml" If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files to enable you to fix any further issues due to different usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below. Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". Installation The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft’s recommendations. 
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restarting any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process has been described in detail in the related FAQ entry for How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server Service Pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available, and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details. Downloads Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008 Version History SQL Server 2008 Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008)

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what arguments and parameters are: Method definition parameters are the input variables of the method. Method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following on §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: Parameters have always been named and still are. Parameters have never been optional and still aren’t. Named Arguments Until now, the way the C# compiler matched method call arguments with method parameters was by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the name of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.” name: “Morgado” age: 42 What this new feature allows is to use the names of the parameters to identify the corresponding arguments in the form: name:value Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters will be done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call declaration: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2 second: 2 third: 0 Notice the variable names. Although invalid as C# identifiers, they are valid .NET identifiers, thus avoiding collisions between user-written and compiler-generated code. Besides allowing re-ordering of the argument list, this feature is very useful for auto-documenting the code, for example, when the argument list is very long or it is not clear, from the call site, what the arguments are. Optional Arguments Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list, and their values are used as the parameter values if the corresponding arguments are missing from the method call declaration. This call declaration: int i = 0; Method(i, third: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1 second: 2 third: 1 Because, when method parameters have default values, arguments can be omitted from the call declaration, this might seem like method overloading or a good replacement for it, but it isn’t. 
    Although methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); The compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like with constant fields, care must be taken when methods with default parameter values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, they won’t be picked up by already compiled code. At first glance, I thought that using optional arguments for “badly” written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.
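    To make that last point concrete, here is a small hypothetical sketch (the FileHelper class and its default value are invented for the example, not taken from the post). Because the default is baked into the caller's compiled code, changing the library's default later does not affect existing callers until they are recompiled:

        // Hypothetical library assembly
        public static class FileHelper
        {
            public static string ReadText(string path, int bufferSize = 1024)
            {
                using (var reader = new System.IO.StreamReader(
                           path, System.Text.Encoding.UTF8, true, bufferSize))
                {
                    return reader.ReadToEnd();
                }
            }
        }

        // Hypothetical calling assembly, compiled against the version above
        public static class Caller
        {
            public static void Main()
            {
                // The compiler emits the equivalent of FileHelper.ReadText("foo.txt", 1024);
                // if the library later changes the default to 4096, this call site still
                // passes 1024 until the calling assembly is recompiled.
                System.Console.WriteLine(FileHelper.ReadText("foo.txt").Length);
            }
        }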

    Read the article

  • No, iCloud Isn’t Backing Them All Up: How to Manage Photos on Your iPhone or iPad

    - by Chris Hoffman
    Are the photos you take with your iPhone or iPad backed up in case you lose your device? If you’re just relying on iCloud to manage your important memories, your photos may not be backed up at all. Apple’s iCloud has a photo-syncing feature in the form of “Photo Stream,” but Photo Stream doesn’t actually perform any long-term backups of your photos. iCloud’s Photo Backup Limitations Assuming you’ve set up iCloud on your iPhone or iPad, your device is using a feature called “Photo Stream” to automatically upload the photos you take to your iCloud storage and sync them across your devices. Unfortunately, there are some big limitations here. 1000 Photos: Photo Stream only backs up the latest 1000 photos. Do you have 1500 photos in your Camera Roll folder on your phone? If so, only the latest 1000 photos are stored in your iCloud account online. If you don’t have those photos backed up elsewhere, you’ll lose them when you lose your phone. If you have 1000 photos and take one more, the oldest photo will be removed from your iCloud Photo Stream. 30 Days: Apple also states that photos in your Photo Stream will be automatically deleted after 30 days “to give your devices plenty of time to connect and download them.” Some people report photos aren’t deleted after 30 days, but it’s clear you shouldn’t rely on iCloud for more than 30 days of storage. iCloud Storage Limits: Apple only gives you 5 GB of iCloud storage space for free, and this is shared between backups, documents, and all other iCloud data. This 5 GB can fill up pretty quickly. If your iCloud storage is full and you haven’t purchased any more storage more from Apple, your photos aren’t being backed up. Videos Aren’t Included: Photo Stream doesn’t include videos, so any videos you take aren’t automatically backed up. It’s clear that iCloud’s Photo Stream isn’t designed as a long-term way to store your photos, just a convenient way to access recent photos on all your devices before you back them up for real. iCloud’s Photo Stream is Designed for Desktop Backups If you have a Mac, you can launch iPhoto and enable the Automatic Import option under Photo Stream in its preferences pane. Assuming your Mac is on and connected to the Internet, iPhoto will automatically download photos from your photo stream and make local backups of them on your hard drive. You’ll then have to back up your photos manually so you don’t lose them if your Mac’s hard drive ever fails. If you have a Windows PC, you can install the iCloud Control Panel, which will create a Photo Stream folder on your PC. Your photos will be automatically downloaded to this folder and stored in it. You’ll want to back up your photos so you don’t lose them if your PC’s hard drive ever fails. Photo Stream is clearly designed to be used along with a desktop application. Photo Stream temporarily backs up your photos to iCloud so iPhoto or iCloud Control Panel can download them to your Mac or PC and make a local backup before they’re deleted. You could also use iTunes to sync your photos from your device to your PC or Mac, but we don’t really recommend it — you should never have to use iTunes. How to Actually Back Up All Your Photos Online So Photo Stream is actually pretty inconvenient — or, at least, it’s just a way to temporarily sync photos between your devices without storing them long-term. But what if you actually want to automatically back up your photos online without them being deleted automatically? 
The solution here is a third-party app that does this for you, offering the automatic photo uploads with long-term storage. There are several good services with apps in the App Store: Dropbox: Dropbox’s Camera Upload feature allows you to automatically upload the photos — and videos — you take to your Dropbox account. They’ll be easily accessible anywhere there’s a Dropbox app and you can get much more free Dropbox storage than you can iCloud storage. Dropbox will never automatically delete your old photos. Google+: Google+ offers photo and video backups with its Auto Upload feature, too. Photos will be stored in your Google+ Photos — formerly Picasa Web Albums — and will be marked as private by default so no one else can view them. Full-size photos will count against your free 15 GB of Google account storage space, but you can also choose to upload an unlimited amount of photos at a smaller resolution. Flickr: The Flickr app is no longer a mess. Flickr offers an Auto Upload feature for uploading full-size photos you take and free Flickr accounts offer a massive 1 TB of storage for you to store your photos. The massive amount of free storage alone makes Flickr worth a look. Use any of these services and you’ll get an online, automatic photo backup solution you can rely on. You’ll get a good chunk of free space, your photos will never be automatically deleted, and you can easily access them from any device. You won’t have to worry about storing local copies of your photos and backing them up manually. Apple should fix this mess and offer a better solution for long-term photo backup, especially considering the limitations aren’t immediately obvious to users. Until they do, third-party apps are ready to step in and take their place. You can also automatically back up your photos to the web on Android with Google+’s Auto Upload or Dropbox’s Camera Upload. Image Credit: Simon Yeo on Flickr     

    Read the article

  • Top 10 Tips & Tricks for Oracle SQL Developer

    - by thatjeffsmith
    Being a short week due to the holiday, and with everyone enjoying their Summer vacations (apologies Southern Hemispherians), I reckoned it was a great time to do one of those lazy recap-Top 10-Reader’s Digest type posts. I’ve been sharing 1-3 tips or ‘tricks’ a week since I started blogging about SQL Developer, and I have more than enough content to write a book. But since I’m lazy, I’m just going to compile a list of my favorite ‘must know’ tips instead. I always have to leave out a few tips when I do my presentations, so now I can refer back to this list to make sure I’m not forgetting anything. So without further ado… 1. Configure Your Preferences Yes, there are a LOT of options. But you don’t need to worry about all of them just yet. I do recommend you take a quick look at these ones in particular. Whether you’re new to the tool or have been using it for 5 years, don’t overlook these settings! 2. Disable Extensions You Aren’t Using If you’re not using Data Miner, or if you’re not working on a Migration – disable those extensions! SQL Developer will run leaner & meaner, plus the user interface will be a bit more simplified making the tool easier to navigate as well. 3. SQL Recall via Keyboard Access your history via the keyboard! Cycle through your recent SQL statements just using these magic key strokes! Ctrl+Up or Ctrl+Down. 4. Format Your Query Output Directly to CSV, XML, HTML, etc Have the query results pre-formatted in the format of your choice! Too lazy to run the Export wizard for your query result sets? Just add the SQL Developer output hints to your statement and have the output auto-magically formatted to the style of your choice! 5. Drag & Drop Multiple Tables to the Worksheet SQL Developer will auto-join the related objects. You can then toggle over to the Query Builder to toggle off the columns you don’t want to query. I guarantee this tip will save you time if you’re joining 3 or more tables! 6. Drag & Drop Multiple Tables to a Relational Model A pretty picture is worth a few dozen DDL scripts? SQL Developer does data modeling! If you ctrl-drag a table to a model, it will take that table and any related tables and reverse engineer them to a relational model! You can then print it out or export it to HTML, PDF, etc. 7. View Your PL/SQL Execution Output Automatically Function returns a refcursor? Procedure had 3 out parameters? When you run these programs via the Procedure Editor, we automatically capture the output and place them into one or more data grids for you to browse. 8. Disable Automatic Code Insight and Use It On-Demand Code Editor – Completion Insight – Enable Completion Auto-Popup (Keyword being Auto) Some folks really don’t like it when their IDEs or word-processors try to do ‘too much’ for them. Thankfully SQL Developer allows you to either increase the delay before it attempts to auto-complete your text OR to disable the automatic bit. Instead, you can invoke it on-demand. 9. Interactive Debugging – Change Your Variable Values as You Step Through Your PLSQL Watches aren’t just for watching. You can actually interact with your programs and ‘see what happens’ when X = 256 instead of 1. 10. Ditch the Tree View for the Schema Browser There’s nothing wrong with the Connection tree for browsing your database objects. But some folks just can’t seem to get comfortable with it. So, we built them a Schema Browser that uses a drop down control instead for changing up your schema and object types. Already Know This Stuff, Want More? 
Just check out my SQL Developer resource page, it’s one of the main links on the top of this page. Or if you can’t find something, just drop me a note in the form of a comment on this page and I’ll do my best to find it or write it for you.

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

    The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components. Three-Tier Client/Server Model Components Client Component Server Component Database Component The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or another network/web-enabled device that is connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and an example of this can be seen in the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process. The Server Components of the network return data to the requesting client based on specific client requests. Server Components also inherit the attributes of a Client Component in that they are a device on the network and they can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests and then interprets the request, processes the web pages, and returns the processed data back to the web browser client so that it may render the data for the user to interpret. The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed. The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model Presentation Layer/Logic Business Layer/Logic Data Layer/Logic The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user. Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows for the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute the requests across multiple servers. When a user comes in through the load-balanced device, they are redirected to a specific server based on a few factors. 
    Common Load Balancing Factors Current Server Availability Current Server Response Time Current Server Priority The Business Layer and corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before it is returned to the end-user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected. The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in most modern applications, data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take one of several forms. Common Database Formats XML File Pipe Delimited File Tab Delimited File Comma Delimited File (CSV) Plain Text File Microsoft Access Microsoft SQL Server MySql Oracle Sybase The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically allows for a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed, so that the application does not have to worry about storing the data in memory. This prevents the overhead that could be created when an application must retain all data in memory. As you can see, the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network. Network Components and Application Layers Interaction Database components will store all data needed for the Data Access Layer to manipulate and return to the Business Layer Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer Client Component hosts the Presentation Layer that interprets the data and presents it to the user
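    As a rough illustration of how these layers can map onto code, here is a minimal sketch built around the duplicate-account rule mentioned above (the class and method names are invented for the example, and a real Data Layer would sit behind a database rather than an in-memory set):

        using System;
        using System.Collections.Generic;

        // Data Layer: stores and returns unprocessed data (an in-memory stand-in for a database).
        public class AccountRepository
        {
            private readonly HashSet<string> emails =
                new HashSet<string>(StringComparer.OrdinalIgnoreCase);

            public bool EmailExists(string email) { return emails.Contains(email); }
            public void Save(string email) { emails.Add(email); }
        }

        // Business Layer: applies business rules before data reaches the Data Layer.
        public class AccountService
        {
            private readonly AccountRepository repository = new AccountRepository();

            public bool Register(string email)
            {
                if (repository.EmailExists(email))
                    return false;              // rule: only one account per email address
                repository.Save(email);
                return true;
            }
        }

        // Presentation Layer: the client-facing part that presents results to the user.
        public static class ConsoleClient
        {
            public static void Main()
            {
                var service = new AccountService();
                Console.WriteLine(service.Register("user@example.com")
                    ? "Account created." : "An account already exists for that email.");
                Console.WriteLine(service.Register("user@example.com")
                    ? "Account created." : "An account already exists for that email.");
            }
        }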

    Read the article

  • Getting selected row in inputListOfValues returnPopupListener

    - by Frank Nimphius
    Model-driven lists of values in Oracle ADF are configured on the ADF Business Components attribute which should be updated with the user's value selection. The value lookup can be configured to be displayed as a select list, combo box, input list of values or combo box with list of values. Displaying the list in an af:inputListOfValues component shows the attribute value in an input text field with an icon attached to it for the user to launch the list-of-values dialog. The list-of-values dialog allows users to use a search form to filter the lookup data list and to select an entry, whose return value is then added as the value of the af:inputListOfValues component. Note: The model-driven LOV can be configured in ADF Business Components to update multiple attributes with the user selection, though the most common use case is to update the value of a single attribute. A question on OTN was how to access the row of the selected return value on the ADF Faces front end. For this, you need to know that there is a Model property defined on the af:inputListOfValues that references the ListOfValuesModel implementation in the model. It is the value of this Model property that you need to get access to. The af:inputListOfValues has a ReturnPopupListener property that you can use to configure a managed bean method to receive notification when the user closes the LOV popup dialog by selecting the Ok button. This listener is not triggered when the cancel button is pressed. The managed bean signature can be created declaratively in Oracle JDeveloper 11g using the Edit option in the context menu next to the ReturnPopupListener field in the Property Inspector. The empty method signature looks as shown below: public void returnListener(ReturnPopupEvent returnPopupEvent) { } The ReturnPopupEvent object gives you access to the RichInputListOfValues component instance, which represents the af:inputListOfValues component at runtime. From here you access the Model property of the component to then get a handle to the CollectionModel. The CollectionModel returns an instance of JUCtrlHierBinding in its getWrappedData method. Though there is no tree binding definition for the list of values dialog defined in the PageDef, it exists. Once you have access to this, you can read the row the user selected in the list of values dialog. 
See the following code: public void returnListener(ReturnPopupEvent returnPopupEvent) {   //access UI component instance from return event RichInputListOfValues lovField =        (RichInputListOfValues)returnPopupEvent.getSource();   //The LOVModel gives us access to the Collection Model and //ADF tree binding used to populate the lookup table ListOfValuesModel lovModel =  lovField.getModel(); CollectionModel collectionModel =          lovModel.getTableModel().getCollectionModel();     //The collection model wraps an instance of the ADF //FacesCtrlHierBinding, which is casted to JUCtrlHierBinding   JUCtrlHierBinding treeBinding =          (JUCtrlHierBinding) collectionModel.getWrappedData();     //the selected rows are defined in a RowKeySet.As the LOV table only   //supports single selections, there is only one entry in the rks RowKeySet rks = (RowKeySet) returnPopupEvent.getReturnValue();     //the ADF Faces table row key is a list. The list contains the //oracle.jbo.Key List tableRowKey = (List) rks.iterator().next();   //get the iterator binding for the LOV lookup table binding   DCIteratorBinding dciter = treeBinding.getDCIteratorBinding();   //get the selected row by its JBO key   Key key = (Key) tableRowKey.get(0); Row rw =  dciter.findRowByKeyString(key.toStringFormat(true)); //work with the row // ... }

    Read the article

  • SQL SERVER – Beginning of SQL Server Architecture – Terminology – Guest Post

    - by pinaldave
    SQL Server Architecture is a very deep subject. Covering it in a single post is an almost impossible task. However, this subject is a very popular topic among beginners and advanced users. I have requested my friend Anil Kumar, who is an expert in the SQL domain, to help me write a simple post about Beginning SQL Server Architecture. As stated earlier, this is a very deep subject, and in this first article of the series he has covered basic terminology. In future articles he will explore the subject further. Anil Kumar Yadav is Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. In this article we will discuss MS SQL Server architecture. The major components of SQL Server are: Relational Engine Storage Engine SQL OS Now we will discuss and understand each one of them. 1) Relational Engine: Also called the query processor, the Relational Engine includes the components of SQL Server that determine what your query exactly needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned. Different Tasks of Relational Engine: Query Processing Memory Management Thread and Task Management Buffer Management Distributed Query Processing 2) Storage Engine: The Storage Engine is responsible for the storage and retrieval of data on the storage system (disk, SAN etc.). To understand more, let’s focus on the following diagram. When we talk about any database in SQL Server, there are 2 types of files that are created at the disk level – the data file and the log file. The data file physically stores the data in data pages. Log files, also known as write-ahead logs, are used for storing transactions performed on the database. Let’s understand the data file and log file in more detail: Data File: The data file stores data in the form of data pages (8 KB), and these data pages are logically organized in extents. Extents: Extents are logical units in the database. They are a combination of 8 data pages, i.e. 64 KB forms an extent. Extents can be of two types, Mixed and Uniform. Mixed extents hold different types of pages like index, system, object data etc. On the other hand, Uniform extents are dedicated to only one type. Pages: We should know what types of pages can be stored in SQL Server; below are some of them: Data Page: It holds the data entered by the user, but not data of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml. Index: It stores the index entries. Text/Image: It stores LOB (Large Object) data like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data. GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents. PFS (Page Free Space): Information related to page allocation and unused space available on pages. IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit. BCM (Bulk Changed Map): Keeps information about the extents changed in a Bulk Operation. DCM (Differential Change Map): This is the information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit. Log File: It is also known as the write-ahead log. It stores modifications to the database (DML and DDL). 
    Sufficient information is logged to be able to roll back transactions if requested and to recover the database in case of failure. Write-ahead logging is used to create log entries, transaction logs are written in chronological order in a circular way, and the truncation policy for logs is based on the recovery model. SQL OS: This lies between the host machine (Windows OS) and SQL Server. All the activities performed on the database engine are taken care of by SQL OS. It is a highly configurable operating system with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management, which deals with the buffer pool and log buffer, and deadlock detection using the blocking and locking structure. Other services include exception handling and hosting for external components like the Common Language Runtime (CLR). I guess this brief article gives you an idea of the various terminology related to SQL Server architecture. In future articles we will explore them further. Guest Author: The author of the article, Anil Kumar Yadav, is Trainer, SQL Domain, Koenig Solutions. Koenig is a premier IT training firm that provides several IT certifications, such as Oracle 11g, Server+, RHCA, SQL Server Training, Prince2 Foundation etc. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
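    If you want to see the data file / log file split described above on a live server, the following hedged C# sketch (the connection string is a placeholder; adjust it for your environment) lists each file of the connected database and its type via the sys.database_files catalog view:

        using System;
        using System.Data.SqlClient;

        class DatabaseFilesDemo
        {
            static void Main()
            {
                // Placeholder connection string - point it at any database you can read.
                var connectionString = "Server=localhost;Database=master;Integrated Security=true";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT name, type_desc, physical_name FROM sys.database_files", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        // type_desc is ROWS for data files (.mdf/.ndf) and LOG for the transaction log (.ldf)
                        while (reader.Read())
                            Console.WriteLine("{0,-20} {1,-6} {2}",
                                reader.GetString(0), reader.GetString(1), reader.GetString(2));
                    }
                }
            }
        }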

    Read the article

< Previous Page | 702 703 704 705 706 707 708 709 710 711 712 713  | Next Page >