Search Results

Search found 6523 results on 261 pages for 'flash cs5'.


  • Cross-platform, cross-browser way to play sound using jQuery 1.4?

    - by Erin
    I'm trying to get jQuery to play a sound on element hover/click (essentially a Flash-style website without Flash). I've tried the methods recommended in "Cross-platform, cross-browser way to play sound from JavaScript?", the jQuery Sound plugin, and a couple of other tutorials, without any success. I'm assuming this is because they haven't been updated since 2008. Does anyone have any recommendations?
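
    One Flash-free approach that already worked in 2010-era browsers is the HTML5 audio element. Below is a minimal sketch, assuming the page contains an element like <audio id="hover-sound" src="click.mp3"></audio>; the IDs, selector, and file name are placeholders, and .bind() is used because .on() only arrived in jQuery 1.7:

        // jQuery 1.4-compatible; rewinds and replays the sound on each event
        jQuery(function ($) {
            var sound = document.getElementById('hover-sound');
            jQuery('#menu a').bind('mouseenter click', function () {
                sound.currentTime = 0; // rewind so rapid hovers replay it
                sound.play();
            });
        });

    Browser audio-format support varied at the time (MP3 vs. Ogg), so shipping both formats via multiple <source> tags is the usual hedge.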


  • Authlogic help! Registering a new user while logged in as another user is not working

    - by looloobs
    Hi. Just as a disclaimer, I am new to Rails and programming in general, so apologies if I misunderstand something obvious. I have Authlogic with activation up and running. For my site, I would like users who are logged in to be able to register other users. The new user would pick their login and password through the activation email, but the existing user needs to enter their email, position, and a couple of other attributes; I want that to be done by the existing user. The problem I am running into: if I am logged in and then try to create a new user, it just tries to update the existing user and doesn't create a new one. I am not sure if there is some way to fix this by starting another session. If that is even right/possible, I wouldn't know how to go about implementing it. I realize that without knowing my full application it may be difficult to answer this, but does this even sound like the right way to go about it? Am I missing something here?

    Users controller:

        class UsersController < ApplicationController
          before_filter :require_no_user, :only => [:new, :create]
          before_filter :require_user, :only => [:show, :edit, :update]

          def new
            @user = User.new
          end

          def create
            @user = User.new
            if @user.signup!(params)
              @user.deliver_activation_instructions!
              flash[:notice] = "Your account has been created. Please check your e-mail for your account activation instructions!"
              redirect_to profile_url
            else
              render :action => :new
            end
          end

          def show
            @user = @current_user
          end

          def edit
            @user = @current_user
          end

          def update
            @user = @current_user # makes our views "cleaner" and more consistent
            if @user.update_attributes(params[:user])
              flash[:notice] = "Account updated!"
              redirect_to profile_url
            else
              render :action => :edit
            end
          end
        end

    My UserSessions controller:

        class UserSessionsController < ApplicationController
          before_filter :require_no_user, :only => [:new, :create]
          before_filter :require_user, :only => :destroy

          def new
            @user_session = UserSession.new
          end

          def create
            @user_session = UserSession.new(params[:user_session])
            if @user_session.save
              flash[:notice] = "Login successful!"
              if @user_session.user.position == 'Battalion Commander'
                redirect_to battalion_path(@user_session.user.battalion_id)
              end
            else
              render :action => :new
            end
          end

          def destroy
            current_user_session.destroy
            flash[:notice] = "Logout successful!"
            redirect_back_or_default new_user_session_url
          end
        end
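
    For context, a common Authlogic gotcha here: by default Authlogic performs "session maintenance", so saving a newly built User record can replace the current session instead of simply creating a second account. A hedged sketch of the usual workaround, assuming Authlogic's save_without_session_maintenance helper and that the require_no_user filter is lifted for this action:

        def create
          @user = User.new(params[:user])
          # Persist the record without touching the creating user's session
          # (save_without_session_maintenance is an Authlogic model helper).
          if @user.save_without_session_maintenance
            @user.deliver_activation_instructions!
            flash[:notice] = "Account created; activation instructions sent."
            redirect_to profile_url
          else
            render :action => :new
          end
        end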


  • Building a decision-making game in jQuery? Where would I store data....

    - by redconservatory
    I built a slideshow/decision-making game in Flash but would like to try redoing it in jQuery. The slideshow part seems simple enough; however, I have a series of user decisions that I'm not sure how to approach. In Flash, when the user makes a decision, I would just store it in a variable or in a Shared Object (Flash's local storage). Is it the same with jQuery, i.e. can I mix regular JavaScript variables with jQuery code?
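
    For what it's worth, jQuery is plain JavaScript, so ordinary variables work fine alongside it, and localStorage plays roughly the role Shared Objects did in Flash. A minimal sketch; the selector and data attributes are placeholders:

        var decisions = {}; // a regular JS object shared by all jQuery handlers

        jQuery('.choice').click(function () {
            decisions[jQuery(this).data('scene')] = jQuery(this).data('pick');
            // Optionally persist across page loads, like a Flash Shared Object
            if (window.localStorage) {
                localStorage.setItem('decisions', JSON.stringify(decisions));
            }
        });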


  • (Strange) C++ linker error in constructor

    - by Microkernel
    I am trying to write a template class in C++ and am getting this strange linker error; I can't figure out the cause. Please let me know what's wrong with this. Here is the error message I get in Visual C++ 2010:

        1>------ Rebuild All started: Project: FlashEmulatorTemplates, Configuration: Debug Win32 ------
        1>  main.cpp
        1>  emulator.cpp
        1>  Generating Code...
        1>main.obj : error LNK2019: unresolved external symbol "public: __thiscall flash_emulator<char>::flash_emulator<char>(char const *,struct FLASH_PROPERTIES *)" (??0?$flash_emulator@D@@QAE@PBDPAUFLASH_PROPERTIES@@@Z) referenced in function _main
        1>C:\Projects\FlashEmulator_templates\VS\FlashEmulatorTemplates\Debug\FlashEmulatorTemplates.exe : fatal error LNK1120: 1 unresolved externals
        ========== Rebuild All: 0 succeeded, 1 failed, 0 skipped ==========

    Error message in g++:

        main.cpp: In function 'int main()':
        main.cpp:8: warning: deprecated conversion from string constant to 'char*'
        /tmp/ccOJ8koe.o: In function `main':
        main.cpp:(.text+0x21): undefined reference to `flash_emulator<char>::flash_emulator(char*, FLASH_PROPERTIES*)'
        collect2: ld returned 1 exit status

    There are two .cpp files and one header file, given below.

    emulator.h:

        #ifndef __EMULATOR_H__
        #define __EMULATOR_H__

        typedef struct {
            int property;
        } FLASH_PROPERTIES;

        /* Flash emulation class */
        template<class T>
        class flash_emulator
        {
        private:
            /* Private data */
            int key;

        public:
            /* Constructor - opens an existing flash by name flashName, or
               creates one with the given FLASH_PROPERTIES if it doesn't exist */
            flash_emulator(const char *flashName, FLASH_PROPERTIES *properties);

            /* Constructor - opens an existing flash by name flashName, or
               creates one with the properties given in configFileName */
            flash_emulator(char *flashName, char *configFileName);

            /* Destructor for the emulator */
            ~flash_emulator() {}
        };

        #endif /* End of __EMULATOR_H__ */

    emulator.cpp:

        #include <Windows.h>
        #include "emulator.h"

        using namespace std;

        template<class T>
        flash_emulator<T>::flash_emulator(const char *flashName, FLASH_PROPERTIES *properties)
        {
            return;
        }

        template<class T>
        flash_emulator<T>::flash_emulator(char *flashName, char *configFileName)
        {
            return;
        }

    main.cpp:

        #include <Windows.h>
        #include "emulator.h"

        int main()
        {
            FLASH_PROPERTIES properties = {0};
            /* note: backslashes in the path literal must be escaped */
            flash_emulator<char> myEmulator("C:\\newEMu.flash", &properties);
            return 0;
        }
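
    The cause is the classic template one: the member definitions live in emulator.cpp, so the compiler never instantiates them for flash_emulator<char>, which only main.cpp requests, and the linker finds no body. One fix is moving the definitions into emulator.h; the other, sketched below, is an explicit instantiation:

        // At the end of emulator.cpp: force the compiler to emit code for
        // the specialization that main.cpp uses.
        template class flash_emulator<char>;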


  • Convert TIFF with Alpha Channel to PNG?

    - by Michael Stum
    I have a TIFF file that has a blue background and an alpha channel. I would like to save a PNG file in which the blue background is transparent, using the alpha channel (because some pixels are not fully transparent). I use Photoshop CS5 Standard, but I haven't found an option there. I don't want to use the Magic Wand, because it struggles with half-transparent pixels and because I do have a perfect alpha channel. Any ideas how this can be done?
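
    If a command-line tool is an acceptable detour, ImageMagick can usually carry a TIFF's existing alpha channel straight into PNG transparency; a hedged sketch, with filenames as placeholders (the PNG32: prefix forces an RGBA output):

        convert input.tif -alpha on PNG32:output.png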


  • web.config 1.0 and 3.5

    - by kareemsaad
    I have a problem: I have two web sites, one on .NET 3.5 and the other on 1.0, and I want to run the 1.0 site under 3.5, but I could not, so I tried to copy some sections from the 1.0 web.config into the 3.5 one. I copied this section:

        <add key="ConnectionString" value="data source=.;persist security info=True;user id=Sharp;Password=#Sharp;initial catalog=Sharp" />
        <add key="ACProduct_Upload" value="SiteUploads/ACProducts/" />
        <add key="ACPDF_Upload" value="SiteUploads/ACPDF/" />
        <add key="ACCertifications_Upload" value="SiteUploads/ACCertifications/" />
        <add key="Certifications_Upload" value="SiteUploads/Certifications/" />
        <add key="flash" value="flash/" />
        <add key="Product_Upload" value="SiteUploads/Products/" />
        <add key="PDF_Upload" value="SiteUploads/PDF/" />
        <add key="HTML_Upload" value="SiteUploads/HTML_Uploads/" />
        <add key="WallPaper_Upload" value="SiteUploads/WallPapers/" />
        <add key="News_Upload" value="SiteUploads/News/" />
        <add key="mailserver" value="111.111.1.1" />
        <add key="fromEmail" value="[email protected]" />
        </appSettings>

    into the 3.5 config here:

        <add key="ACProduct_Upload" value="SiteUploads/ACProducts/" />
        <add key="ACPDF_Upload" value="SiteUploads/ACPDF/" />
        <add key="ACCertifications_Upload" value="SiteUploads/ACCertifications/" />
        <add key="Certifications_Upload" value="SiteUploads/Certifications/" />
        <add key="flash" value="flash/" />
        <add key="Product_Upload" value="SiteUploads/Products/" />
        <add key="PDF_Upload" value="SiteUploads/PDF/" />
        <add key="HTML_Upload" value="SiteUploads/HTML_Uploads/" />
        <add key="WallPaper_Upload" value="SiteUploads/WallPapers/" />
        <add key="News_Upload" value="SiteUploads/News/" />
        <add key="mailserver" value="192.168.6.3" />
        <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
            <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
            <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
              <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere"/>
              <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
              <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
              <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/>
            </sectionGroup>
          </sectionGroup>
        </sectionGroup>

        <add name="Sharp" connectionString="Data Source=.;Initial Catalog=SharpHA;User ID=SharpHA;Password=#SharpHA;" providerName="System.Data.SqlClient"/>
        </connectionStrings>

    but this problem appears:

        Only one <configSections> element allowed per config file and if present must be the first child of the root <configuration> element.

    Please, can anyone help me?


  • Compatibility of 64-bit systems

    - by Bill
    I am running Windows Vista Home Premium 32-bit on an Intel Core 2 Duo E7300, which I believe is a processor capable of running 64-bit. I also run Adobe Photoshop CS5, which I believe could utilise a 64-bit processor and more RAM. Could I change my operating system to 64-bit, and would I gain anything from doing so?


  • Where are the default Adobe Illustrator fonts found when installed?

    - by EdenMachine
    I need a copy of the Myriad-Roman font that is the default font for Adobe Illustrator. I have Illustrator installed, but I need to put the font on my server so my PDF DLL can read it, since my PDF uses that Myriad-Roman font. Please don't get distracted by the PDF part: I just need to know how to find the font wherever Adobe puts it when I install Illustrator (CS5). TIA!


  • Photoshop changes tablet screen area

    - by Zonda333
    Usually my Wacom Bamboo tablet's screen area (the area the cursor can cover using the tablet) is set to my one 1280x1024 monitor. However, when I run any Adobe product, such as Photoshop or Illustrator (CS5 or CS4), and it is the active window, my screen area shrinks to around 400x400, making the tablet effectively useless. If another program is the active window, it works fine. I'm running Windows 7 32-bit with up-to-date tablet drivers. Does anyone know a fix for this?


  • CLSF & CLK 2013 Trip Report by Jeff Liu

    - by jamesmorris
    This is a contributed post from Jeff Liu, lead XFS developer for the Oracle mainline Linux kernel team. Recently, I attended both the China Linux Storage and Filesystem workshop (CLSF) and the China Linux Kernel conference (CLK), which were held in Shanghai. Here are the highlights of both events.

    CLSF - 17th October

    XFS update (led by Jeff Liu)

    XFS keeps up a rapid pace of change, focused especially on infrastructure and performance improvements as well as new feature development. This is reflected in some sample statistics for XFS, Ext4+JBD2, and Btrfs:

        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/xfs
          141 files changed, 27598 insertions(+), 19113 deletions(-)
        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/ext4 fs/jbd2
          39 files changed, 10487 insertions(+), 5454 deletions(-)
        # git diff --stat --minimal -C -M v3.7..v3.12-rc4 -- fs/btrfs
          70 files changed, 19875 insertions(+), 8130 deletions(-)

    What made up those changes in XFS?

    - Self-describing metadata (CRC32c). This new feature contributed about 70% of the code changes; it can be enabled via `mkfs.xfs -m crc=1 /dev/xxx` for the v5 superblock.
    - Transaction log space reservation improvements. The log space reservation is now calculated at mount time rather than at runtime, reducing CPU overhead.
    - User namespace support, so both XFS and USERNS can be enabled in the kernel configuration beginning with Linux 3.10. Thanks to Dwight Engen for his efforts on this.
    - Split project/group quota inodes. Originally, project quota could not be enabled together with group quota because they shared the same quota file inode; now it works, but only for the v5 superblock (i.e. with CRC enabled).
    - CONFIG_XFS_WARN, a new lightweight runtime debugger which can be deployed in production environments.
    - Readahead of log objects during recovery, which speeds up log replay significantly.
    - Speculative preallocation inode tracking, clearing, and throttling. The main purpose is to deal with inodes carrying post-EOF space due to speculative preallocation, and to support improved quota management that frees a significant amount of unwritten space when at or near EDQUOT. It supports background scanning, which occurs at a longish interval (5 minutes by default, tunable), and on-demand scanning/trimming via ioctl(2).

    Bitter arguments ensued from this session, especially around comparisons between Ext4 and Btrfs in different areas; I had to spend a whole morning of the first day answering those questions. We basically agreed that XFS is the best choice on Linux nowadays, because:

    - Stability: XFS has a good stability record over the past 10 years. Fengguang Wu, who leads the 0-day kernel test project, also said that he has observed fewer errors in XFS than in other filesystems over the past year or more. I owe that to the XFS upstream code reviewers, who always perform serious code review as well as testing.
    - Good performance for large and small files; "XFS does not work well for small files" has been an old story for years.
    - Best choice (maybe) for distributed PB-scale filesystems; e.g. Ceph recommends deploying the OSD daemon on XFS because Ext4 has a limited xattr size.
    - Best choice for large storage (>16TB); Ext4 does not support a single file larger than around 15.95TB.
    - Scalability: any objection to XFS being best on this point? :)
    - XFS deals with transaction concurrency better than Ext4. Why? The maximum size of the XFS log is 2038MB, compared to 128MB for Ext4.
    - Misc: Ext4 is widely used and has proven fast and stable under various loads and scenarios; XFS just needs more customers; and Btrfs is still on the road to maturity.
    Ceph introduction (led by Li Wang)

    This is a hot topic. Li gave us a nice introduction to the design as well as their current work. The Ceph client has actually been included in the Linux kernel since 2.6.34 and supported by OpenStack since Folsom, but it seems it has not yet been widely deployed in production environments. Their major work focuses on inline data support. Ceph separates metadata and data storage, so a file access needs two round trips: fetch the metadata from the MDS, then get the data from the OSD; small-file access is therefore limited by network latency. The solution is, for small files, to store the data with the metadata, so that when a small file is accessed, the metadata server can push both metadata and data to the client at the same time. In this way they avoid calculating the data offset and save the communication to the OSD. They have only run small-scale tests of this feature but saw noticeable improvements. Test environment: Intel 2-CPU 12-core, 64GB RAM, Ubuntu 12.04, Ceph 0.56.6 with 200GB SATA disk, 15 OSDs, 1 MDS, 1 MON. Sequential read performance for 1K files improved by about 50%.

    I asked Li and Zheng Yan (the core developer of Ceph, who also worked on Btrfs) whether Ceph is really stable enough to deploy in production for large-scale PB-level storage, but they could not give a positive answer; it looks like Ceph is not even fully rolled out across DreamHost (subject to confirmation). According to Li, they have only deployed Ceph on small-scale storage (32 nodes), although they would like to try 6000 nodes in the future.

    Improving Linux swap for flash storage (led by Shaohua Li)

    Because of its high density, low power, and low price, flash storage (SSD) is a good candidate to partially replace DRAM. A quick answer for this is to use the SSD as swap. But Linux swap was designed for slow hard-disk storage, so there are a lot of challenges to using SSDs efficiently for swap.

    Swap-out:

    - swap_map scan: swap_map is the in-memory data structure that tracks swap disk usage, but it uses a slow linear scan, which becomes a bottleneck when finding many adjacent pages on an SSD. Shaohua changed it to a cluster (128K) list, resulting in an O(1) algorithm. However, this approach needs restrictive cluster alignment and is only enabled for SSDs.
    - IO pattern: in most cases the swap IO is interleaved, because of multiple reclaimers or because a free cluster is shared by all reclaimers. The block layer can merge interleaved IO to some extent, but we cannot count on it completely. Hence a per-CPU cluster was added on top of the previous change; it helps each reclaimer do sequential IO, which the block layer can merge more easily.
    - TLB flush: when reclaiming one active page, we first move the page from the active LRU list to the inactive LRU list, and then reclaim it from the inactive LRU to swap it out. During this process the PTE must be cleared twice: first the 'A' (ACCESSED) bit, then the 'P' (PRESENT) bit. Processors have to send lots of IPIs, which makes the TLB flush really expensive. Some work has been done to improve this, including reworking smp_call_function_many() and removing the first TLB flush on x86, but there are still arguments here and only part of the work has been merged to mainline.

    Swap-in: a page fault does iodepth=1 synchronous IO, which is a bit wasteful if it only issues a single page-sized IO. The obvious solution is swap readahead.
    But the current in-kernel swap readahead is arbitrary (always 8 pages) and performs well for neither random nor sequential access workloads. Shaohua introduced a new use of madvise(MADV_WILLNEED) to do swap prefetch, so the change lives in the userspace API and leaves the in-kernel readahead unchanged (though I think some improvement could also be done there); a sketch of the userspace side appears at the end of this post.

    Swap discard: as we know, discard is important for SSD write throughput, but the current swap discard implementation is synchronous. He changed it to asynchronous discard, which allows discard and write to run at the same time; the unit of discard was also optimized to a cluster.

    Misc, lock contention: with many concurrent swap-outs and swap-ins, contention on locks such as anon_vma or swap_lock is high, so he changed swap_lock to a per-swap-device lock. Some contention remains on very fast SSDs because of the swap-cache address_space lock.

    Zproject (led by Bob Liu)

    Bob gave us a very nice introduction to the current state of memory compression. There are now three projects (zswap/zram/zcache) which all aim to smooth out swap IO storms and improve performance, but each has its own pros and cons.

    - ZSWAP: implemented on top of the frontswap API, using a dynamic allocator named zbud to allocate free pages. "Zbud" means pairs of zpages are "buddied": it can store at most two compressed pages per page frame, so the maximum compression ratio is 50%. Each page frame is LRU-linked and can be shrunk under memory pressure. If the compressed memory pool reaches its limit, shrinking or reclaim happens: a page frame is decompressed into two newly allocated pages, which are then written to the real swap device, but this can fail when allocating the two pages.
    - ZRAM: acts as a compressed ramdisk used as a swap device. It uses zsmalloc as its allocator, which has high density but may have fragmentation issues. Besides, page reclaim is hard, since uncompressing may need more pages just to free one. ZRAM is preferred by embedded systems, which may not have any real swap device. Both ZRAM and ZSWAP are currently in the driver staging tree, and there have been discussions in the mm community about merging ZRAM into ZSWAP or vice versa, but no agreement yet.
    - ZCACHE: handles file page compression, but it was recently removed from staging.

    From industry (led by Tang Jie, LSI)

    An LSI engineer introduced several new products. The first was RAID 5/6 cards that use full-stripe writes to improve performance. The second was the SandForce flash controller, which can understand data file types (data entropy) to reduce write amplification (WA) for nearly all writes; it is called DuraWrite, and a typical WA is 0.5. What's more, if its Dynamic Logical Capacity function module is enabled, the controller can do data compression transparently to the upper layers. LSI testing shows that with this virtual capacity enabled, a 1x TB drive can support up to 2x TB of capacity, but the application must monitor free flash space to maintain optimal performance and guard against free-space exhaustion. He said the most useful application is databases.

    Another thing worth mentioning is an NV-DRAM memory in NMR/Raptor which is directly exposed to the host system. Applications can access the NV-DRAM directly via a memory address, using the standard system call mmap(). He said this is very useful for database logging today.
    These kinds of NVM products have begun to appear in recent years, and it is said that Samsung is building a research center in China for related products. IMHO, NVM will affect the current OS stack, especially file systems; journaling, for example, may need a redesign to fully utilize nonvolatile memory.

    OCFS2 (led by Canquan Shen)

    Without a doubt, Huawei has been the biggest contributor to OCFS2 over the past two years: they have posted 46 upstream patches, of which 39 have been merged. Their current project is based on 32/64-node clusters, but they have also tried 128 nodes at the experimental stage. Their major ongoing work is support for ATS (atomic test and set), which can work together with DLM. This idea looks inspired by VMware's VMFS locking; see http://blogs.vmware.com/vsphere/2012/05/vmfs-locking-uncovered.html

    CLK - 18th October 2013

    Improving Linux Development with Better Tools (Andi Kleen)

    This talk focused on how to find and fix bugs as Linux's complexity grows. Generally, we can do this with the following kinds of tools:

    - Static code checkers, e.g. sparse, smatch, coccinelle, clang checker, checkpatch, gcc -W/LTO, stanse. These can check a lot of things, from simple mistakes to complex problems, but the challenges are: some are very slow; false positives; and getting the false-positive rate down may take a concentrated effort. In particular, no static checker he has found can follow indirect calls ("OO in C", common in the kernel):

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        };

        foo->do_foo(foo);

    - Dynamic runtime checkers, e.g. thread checkers, kmemcheck, lockdep. Ideally all kernel code would come with a test suite; then someone could run all the dynamic checkers.
    - Fuzzers/test suites. Trinity, for example, is a great tool that finds many bugs, but it needs a manual model for each syscall. Modern fuzzers use automatic feedback, but not for the kernel yet: http://taviso.decsystem.org/making_software_dumber.pdf
    - Debuggers/tracers to understand code, e.g. ftrace, which can dump state on events/oops/custom triggers, but in many cases still has too much overhead to run at all times during debugging.
    - Tools to read and understand source, e.g. grep/cscope, which work great for many cases but do not understand the indirect-pointer "OO in C" model used in the kernel. Give us all "do_foo" instances:

        struct foo_ops {
            int (*do_foo)(struct foo *obj);
        } = { .do_foo = my_foo };

        foo->do_foo(foo);

      It would be great to have a cscope-like tool that understands this based on types and initializers.

    XFS: The High Performance Enterprise File System (Jeff Liu) [slides]

    I gave a talk introducing the disk layout and unique features of XFS, as well as the recent changes. The slides include some charts comparing the small-file performance of XFS, Btrfs, and Ext4. About a dozen attendees raised their hands when I asked who had experience with XFS. I remember that when I asked the same question at LinuxCon Japan, only 3 people raised their hands, but they were Chris Mason, Ric Wheeler, and another attendee. The audience questions mainly focused on stability and comparisons with other file systems.

    Linux Containers (Feng Gao)

    The speaker introduced the purpose of each kind of namespace, including mount/UTS/IPC/network/PID/user, as well as the system API/ABI. For the userspace tools, he mainly focused on libvirt LXC rather than the LXC userspace tools.
    Libvirt LXC is another userspace container management tool, implemented as a type of libvirt driver. It can manage containers, create namespaces, create a private filesystem layout for a container, create devices for a container, and set up resource controllers via cgroups. In this talk, Feng also mentioned two possible new namespaces for the future: the first is an audit namespace, though it is not yet clear whether it should be assigned to the user namespace; the other is a syslog namespace, but the question is whether we really need it.

    In-memory Compression (Bob Liu)

    The same as at CLSF: a nice introduction, which I have already covered above.

    Misc

    There were some other talks related to ACPI-based memory hotplug, smart wake-affinity in the scheduler, etc., but my head is not big enough to record all those things.

    -- Jeff Liu
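
    To make the swap-prefetch interface mentioned earlier concrete, here is a minimal hedged sketch of the userspace side as described in Shaohua's session (MADV_WILLNEED is a real madvise(2) flag; the mapping size is a placeholder):

        #include <sys/mman.h>

        int main(void)
        {
            size_t len = 64UL * 1024 * 1024;
            /* anonymous mapping standing in for a swapped-out working set */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            /* Hint that this range will be needed soon; with the patches
               discussed above, this can trigger swap-in readahead. */
            madvise(buf, len, MADV_WILLNEED);
            return 0;
        }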


  • Recover Deleted Files on an NTFS Hard Drive from a Ubuntu Live CD

    - by Trevor Bekolay
    Accidentally deleting a file is a terrible feeling. Not being able to boot into Windows and undelete that file makes it even worse. Fortunately, you can recover deleted files on NTFS hard drives from an Ubuntu Live CD.

    To show this process, we created four files on the desktop of a Windows XP machine and then deleted them. We then booted up the same machine with the bootable Ubuntu 9.10 USB flash drive that we created last week. Once Ubuntu 9.10 boots up, open a terminal by clicking Applications in the top left of the screen, and then selecting Accessories > Terminal.

    To undelete our files, we first need to identify the hard drive that we want to undelete from. In the terminal window, type in:

        sudo fdisk -l

    and press Enter. What you're looking for is a line that ends with HPFS/NTFS (under the heading System). In our case, the device is "/dev/sda1". This may be slightly different for you, but it will still begin with /dev/. Note this device name.

    If you have more than one hard drive partition formatted as NTFS, then you may be able to identify the correct partition by its size. If you look at the second line of text in the screenshot above, it reads "Disk /dev/sda: 136.4 GB, ...". This means that the hard drive that Ubuntu has named /dev/sda is 136.4 GB large. If your hard drives are of different sizes, this information can help you track down the right device name to use. Alternatively, you can just try them all, though this can be time consuming for large hard drives.

    Now that you know the name Ubuntu has assigned to your hard drive, we'll scan it to see what files we can uncover. In the terminal window, type:

        sudo ntfsundelete <HD name>

    and hit Enter. In our case, the command is:

        sudo ntfsundelete /dev/sda1

    The names of files that can be recovered show up in the far right column. The percentage in the third column tells us how much of each file can be recovered. Three of the four files that we originally deleted show up in this list, even though we shut down the computer right after deleting the four files; so even in ideal cases, your files may not be recoverable. Nevertheless, we have three files that we can recover: two JPGs and an MPG.

    Note: ntfsundelete is immediately available in the Ubuntu 9.10 Live CD. If you are in a different version of Ubuntu, or for some other reason get an error when trying to use ntfsundelete, you can install it by entering "sudo apt-get install ntfsprogs" in a terminal window.

    To quickly recover the two JPGs, we will use the * wildcard to recover all of the files that end with .jpg. In the terminal window, enter:

        sudo ntfsundelete <HD name> -u -m *.jpg

    which is, in our case:

        sudo ntfsundelete /dev/sda1 -u -m *.jpg

    The two files are recovered from the NTFS hard drive and saved in the current working directory of the terminal. By default, this is the home directory of the current user, though we are working in the Desktop folder.

    Note that the ntfsundelete program does not make any changes to the original NTFS hard drive. If you want to take those files and put them back on the NTFS hard drive, you will have to move them there after they are undeleted with ntfsundelete. Of course, you can also put them on your flash drive, or open Firefox and email them to yourself; the sky's the limit!

    We have one more file to undelete: our MPG. Note the first column on the far left; it contains a number, the file's inode. Think of this as the file's unique identifier. Note this number.

    To undelete a file by its inode, enter the following in the terminal:

        sudo ntfsundelete <HD name> -u -i <inode>

    In our case, this is:

        sudo ntfsundelete /dev/sda1 -u -i 14159

    This recovers the file, along with an identifier that we don't really care about. All three of our recoverable files are now recovered. However, Ubuntu lets us know visually that we can't use these files yet. That's because the ntfsundelete program saves the files as the "root" user, not the "ubuntu" user. We can verify this by typing the following in our terminal window:

        ls -l

    We want these three files to be owned by ubuntu, not root. To do this, enter the following in the terminal window:

        sudo chown ubuntu <Files>

    If the current folder has other files in it, you may not want to change their owner to ubuntu. However, in our case we only have these three files in this folder, so we will use the * wildcard to change the owner of all three:

        sudo chown ubuntu *

    The files now look normal, and we can do whatever we want with them. Hopefully you won't need to use this tip, but if you do, ntfsundelete is a nice command-line utility. It doesn't have a fancy GUI like many of the similar Windows programs, but it is a powerful tool that can recover your files quickly. See ntfsundelete's manual page for more detailed usage information.


  • Django app that can provide user friendly, multiple / mass file upload functionality to other apps

    - by hopla
    Hi, I'm going to be honest: this is a question I asked on the django-users mailing list last week. Since I didn't get any replies there yet, I'm reposting it on Stack Overflow in the hope that it gets more attention here.

    I want to create an app that makes it easy to add user-friendly multiple/mass file upload to your own apps. By user friendly I mean uploads like Gmail or Flickr, where the user can select multiple files at once in the browse-file dialog. The files are then uploaded sequentially or in parallel, and a nice overview of the selected files is shown on the page with a progress bar next to them. A 'Cancel upload' button is also a possible option.

    All that niceness is usually achieved with a Flash object. Complete client-side solutions are out there, like SWFUpload (http://swfupload.org/), FancyUpload (http://digitarald.de/project/fancyupload/), YUI 2 Uploader (http://developer.yahoo.com/yui/uploader/), and probably many more. Of course, the trick is getting those solutions integrated into your project, especially in a framework like Django, doubly so if you want it to be reusable.

    So, I have a few ideas, but I'm neither an expert on Django nor on Flash-based upload solutions. I'll share my ideas here in the hope of getting some feedback from more knowledgeable and experienced people. (Or even just some "I want this too!" replies :) ) You will notice that I make a few assumptions; this is to keep the (initial) scope of the application under control. These assumptions are of course debatable.

    All right, my ideas so far:

    - If you want to mass upload multiple files, you are going to have a model to contain each file in, i.e. the model will contain one FileField or one ImageField. Models with multiple (but of course finite) FileFields/ImageFields are not in need of easy mass uploading, IMHO: if you have a model with 100 FileFields, you are doing something wrong :)
    - Examples where you would want my envisioned kind of mass upload: an app that has just one model, 'Brochure', with a file field, a title field (dynamically created from the filename), and a date_added field; or a photo gallery app with models 'Gallery' and 'Photo', where you pick a Gallery to add pictures to, upload the pictures, and new Photo objects are created with their foreign keys set to the chosen Gallery.
    - It would be nice to be able to configure or extend the app for your favorite Flash upload solution. We can pick one of the three above as a default, but implement the app so that people can easily add other implementations (kind of like how Django can use multiple databases). Let it be agnostic to any particular client-side solution. If we need to pick one to start with, maybe pick the one with the smallest footprint (smallest download of client-side stuff)?
    - The Flash-based solutions asynchronously (and either sequentially or in parallel) POST the files to a URL. I suggest that URL be local to our generic app, so it's the same for every app that uses ours. That URL will go to a view provided by our generic app. The view will do the following: create a new model instance, add the file, OPTIONALLY DO EXTRA STUFF, and save the instance. DO EXTRA STUFF is code that the app using our app wants to run. It doesn't have to provide any extra code: if the model has just a FileField/ImageField, the standard view code will do the job. But most apps will want to do extra stuff, I think, like filling in the other fields: title, date_added, foreign keys, many-to-many relations, ...

    I have not yet thought through a mechanism for DO EXTRA STUFF. Just wrapping the generic app's view came to mind, but that is not developer friendly, since you would have to write your own URL pattern and your own view; then you would have to tell the Flash solution to use a new URL, and so on. I think something like signals could be used here (see the sketch below).

    Forms/admin: I'm still very sketchy on how all this could best be integrated into the admin or into generic Django forms/widgets (and this is where my lack of Django experience shows):

    - In the case of the Gallery/Photo app, you could provide a mass Photo upload widget on the Gallery detail form. But what if the Gallery instance is not saved yet? The file upload view won't be able to set the foreign keys on the Photo instances. I see that the auth app, when you create a user, first asks for username and password, and only then presents a bigger form to fill in the email address, pick roles, etc. We could do something like that.
    - In the case of an app with just one model: how do you provide a form in the Django admin to do your mass upload? You can't do it with the detail form of your model; that is just for one model instance.

    There are probably dozens more questions that need to be answered before I can even start on this app. So please tell me what you think! Give me input! What do you like? What not? What would you do differently? Is this idea solid? Where is it not? Thank you!
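
    A hedged sketch of the signal idea from the question above; the signal, view, and parameter names are illustrative placeholders, not an existing API, and it assumes Django-1.x-era conventions:

        import django.dispatch
        from django.http import HttpResponse

        # Apps that use the generic uploader connect to this signal to
        # "DO EXTRA STUFF" (titles, foreign keys, ...) before saving.
        file_uploaded = django.dispatch.Signal(providing_args=["instance", "request"])

        def mass_upload_view(request, model_class, file_field="file"):
            """Create one model instance per POSTed file."""
            for f in request.FILES.getlist("file"):
                instance = model_class()
                setattr(instance, file_field, f)
                file_uploaded.send(sender=model_class, instance=instance,
                                   request=request)
                instance.save()
            return HttpResponse("OK")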


  • Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4

    - by Brian
    Oracle's SPARC T4-4 server running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark achieved a world record: 18,000 concurrent users experiencing subsecond response times while a PeopleSoft Payroll batch job covering 500,000 employees executed in 32.4 minutes. This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

    The SPARC T4-4 server running the application tier used Oracle Solaris Zones, which provide a flexible, scalable, and manageable virtualization environment. The average CPU utilization was 17% on the SPARC T4-2 server in the web tier, 59% on the SPARC T4-4 server in the application tier, and 47% on the SPARC T4-4 server in the database tier (online and batch), leaving significant headroom for additional processing across all three tiers. The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

    Performance Landscape

    Results are presented for the PeopleSoft HRMS Self-Service and Payroll combined benchmark. The new result with 128 streams shows a significant improvement in payroll batch processing time, with little impact on the response time of the self-service component.

    PeopleSoft HRMS Self-Service and Payroll Benchmark
        Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
          Users: 18,000 | Avg response, search: 0.988 s | Avg response, save: 0.539 s | Batch time: 32.4 min | Streams: 128
        Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
          Users: 18,000 | Avg response, search: 0.944 s | Avg response, save: 0.503 s | Batch time: 43.3 min | Streams: 64

    The following results are for the PeopleSoft HRMS Self-Service benchmark that was run previously. They are not directly comparable with the combined results because they do not include the payroll component.

    PeopleSoft HRMS Self-Service 9.1 Benchmark
        Systems: SPARC T4-2 (web), SPARC T4-4 (app), 2x SPARC T4-2 (db)
          Users: 18,000 | Avg response, search: 1.048 s | Avg response, save: 0.742 s | Batch time: N/A | Streams: N/A

    The following results are for the PeopleSoft Payroll benchmark that was run previously. They are not directly comparable with the combined results because they do not include the self-service component.

    PeopleSoft Payroll (N.A.) 9.1 - 500K Employees (7 Million SQL PayCalc, Unicode)
        Systems: SPARC T4-4 (db)
          Users: N/A | Avg response, search: N/A | Avg response, save: N/A | Batch time: 30.84 min | Streams: 96

    Configuration Summary

    Application tier:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 512 GB memory
        Oracle Solaris 11 11/11, PeopleTools 8.52, PeopleSoft HCM 9.1,
        Oracle Tuxedo 10.3.0.0 (64-bit, patch level 031),
        Java Platform, Standard Edition Development Kit 6 Update 32

    Database tier:
        1 x SPARC T4-4 server with 4 x SPARC T4 processors, 3.0 GHz, 256 GB memory
        Oracle Solaris 11 11/11, Oracle Database 11g Release 2, PeopleTools 8.52,
        Oracle Tuxedo 10.3.0.0 (64-bit, patch level 031),
        Micro Focus Server Express (COBOL v 5.1.00)

    Web tier:
        1 x SPARC T4-2 server with 2 x SPARC T4 processors, 2.85 GHz, 256 GB memory
        Oracle Solaris 11 11/11, PeopleTools 8.52, Oracle WebLogic Server 10.3.4,
        Java Platform, Standard Edition Development Kit 6 Update 32

    Storage:
        1 x Sun Server X2-4 as a COMSTAR head for data (4 x Intel Xeon X7550, 2.0 GHz, 128 GB memory)
        1 x Sun Storage F5100 Flash Array (80 flash modules)
        1 x Sun Storage F5100 Flash Array (40 flash modules)
        1 x Sun Fire X4275 as a COMSTAR head for redo logs (12 x 2 TB SAS disks with Niwot RAID controller)

    Benchmark Description

    This benchmark combines the PeopleSoft HCM 9.1 HR Self-Service online workload and the PeopleSoft Payroll batch workload, running against a unified database deployed on Oracle Database 11g Release 2. The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark whose database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published.

    PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate employees, managers, and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions, such as viewing a paycheck, promoting and hiring employees, and updating an employee profile. All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all transactions.

    The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation, and Payroll Confirmation processes.

    The metrics for each benchmark are taken while both run simultaneously against the same database back end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with the online transactions for the duration of the steady state.

    Key Points and Best Practices

    Two PeopleSoft domain sets with 200 application servers each were hosted in two separate Oracle Solaris Zones on a SPARC T4-4 server, to demonstrate consolidation of multiple application servers, ease of administration, and performance tuning. Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 threads in total). The default set (one core from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This was done to improve performance: memory access latency is reduced by using the physical memory closest to the processors, and I/O interrupt handling is offloaded to default-set threads, freeing up CPU resources for application-server threads and balancing the application workload across 240 threads. A total of 128 PeopleSoft stream server processes were used on the database node to complete the payroll batch job covering 500,000 employees in 32.4 minutes.

    See Also

    - Oracle PeopleSoft Benchmark White Papers (oracle.com)
    - SPARC T4-2 Server (oracle.com OTN)
    - SPARC T4-4 Server (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management (oracle.com OTN)
    - PeopleSoft Enterprise Human Capital Management, Payroll (oracle.com OTN)
    - Oracle Solaris (oracle.com OTN)
    - Oracle Database 11g Release 2 (oracle.com OTN)

    Disclosure Statement

    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 8 November 2012.


  • plupload variable upload path?

    - by SoulieBaby
    Hi all, I'm using plupload (http://www.plupload.com/index.php) to upload files to my server; however, I wanted to know if there is any way of making the upload path variable. Basically I need to select the upload path folder first, then choose the files using plupload, and then upload to the initially selected folder. I've tried a few different ways, but I can't seem to pass the variable folder path along to the upload.php file. I'm using the Flash runtime of plupload. If someone could help me out, that would be fantastic! :)

    Here's my plupload jQuery:

        jQuery.noConflict();
        jQuery(document).ready(function() {
            jQuery("#flash_uploader").pluploadQueue({
                // General settings
                runtimes: 'flash',
                url: '/assets/upload/upload.php',
                max_file_size: '10mb',
                chunk_size: '1mb',
                unique_names: false,

                // Resize images on clientside if we can
                resize: {width: 500, height: 350, quality: 100},

                // Flash settings
                flash_swf_url: '/assets/upload/flash/plupload.flash.swf'
            });
        });

    And here's the upload.php file:

        <?php
        /**
         * upload.php
         *
         * Copyright 2009, Moxiecode Systems AB
         * Released under GPL License.
         *
         * License: http://www.plupload.com/license
         * Contributing: http://www.plupload.com/contributing
         */

        // HTTP headers for no cache etc
        header('Content-type: text/plain; charset=UTF-8');
        header("Expires: Mon, 26 Jul 1997 05:00:00 GMT");
        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        header("Cache-Control: no-store, no-cache, must-revalidate");
        header("Cache-Control: post-check=0, pre-check=0", false);
        header("Pragma: no-cache");

        // Settings
        $targetDir = $_SERVER['DOCUMENT_ROOT'] . "/tmp/uploads";  // temp directory  <- need these to be variable
        $finalDir  = $_SERVER['DOCUMENT_ROOT'] . "/tmp/uploads2"; // final directory <- need these to be variable
        $cleanupTargetDir = true; // Remove old files
        $maxFileAge = 60 * 60;    // Temp file age in seconds

        // 5 minutes execution time
        @set_time_limit(5 * 60);
        // usleep(5000);

        // Get parameters
        $chunk  = isset($_REQUEST["chunk"])  ? $_REQUEST["chunk"]  : 0;
        $chunks = isset($_REQUEST["chunks"]) ? $_REQUEST["chunks"] : 0;
        $fileName = isset($_REQUEST["name"]) ? $_REQUEST["name"] : '';

        // Clean the fileName for security reasons
        $fileName = preg_replace('/[^\w\._]+/', '', $fileName);

        // Create target dir
        if (!file_exists($targetDir))
            @mkdir($targetDir);

        // Remove old temp files
        if (is_dir($targetDir) && ($dir = opendir($targetDir))) {
            while (($file = readdir($dir)) !== false) {
                $filePath = $targetDir . DIRECTORY_SEPARATOR . $file;

                // Remove temp files if they are older than the max age
                if (preg_match('/\\.tmp$/', $file) && (filemtime($filePath) < time() - $maxFileAge))
                    @unlink($filePath);
            }
            closedir($dir);
        } else
            die('{"jsonrpc" : "2.0", "error" : {"code": 100, "message": "Failed to open temp directory."}, "id" : "id"}');

        // Look for the content type header
        if (isset($_SERVER["HTTP_CONTENT_TYPE"]))
            $contentType = $_SERVER["HTTP_CONTENT_TYPE"];

        if (isset($_SERVER["CONTENT_TYPE"]))
            $contentType = $_SERVER["CONTENT_TYPE"];

        if (strpos($contentType, "multipart") !== false) {
            if (isset($_FILES['file']['tmp_name']) && is_uploaded_file($_FILES['file']['tmp_name'])) {
                // Open temp file
                $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
                if ($out) {
                    // Read binary input stream and append it to temp file
                    $in = fopen($_FILES['file']['tmp_name'], "rb");
                    if ($in) {
                        while ($buff = fread($in, 4096))
                            fwrite($out, $buff);
                    } else
                        die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');
                    fclose($out);
                    unlink($_FILES['file']['tmp_name']);
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 103, "message": "Failed to move uploaded file."}, "id" : "id"}');
        } else {
            // Open temp file
            $out = fopen($targetDir . DIRECTORY_SEPARATOR . $fileName, $chunk == 0 ? "wb" : "ab");
            if ($out) {
                // Read binary input stream and append it to temp file
                $in = fopen("php://input", "rb");
                if ($in) {
                    while ($buff = fread($in, 4096))
                        fwrite($out, $buff);
                } else
                    die('{"jsonrpc" : "2.0", "error" : {"code": 101, "message": "Failed to open input stream."}, "id" : "id"}');
                fclose($out);
            } else
                die('{"jsonrpc" : "2.0", "error" : {"code": 102, "message": "Failed to open output stream."}, "id" : "id"}');
        }

        // Move the file from $targetDir to $finalDir after receiving the final chunk
        if ($chunk == ($chunks - 1))
            rename($targetDir . DIRECTORY_SEPARATOR . $fileName, $finalDir . DIRECTORY_SEPARATOR . $fileName);

        // Return JSON-RPC response
        die('{"jsonrpc" : "2.0", "result" : null, "id" : "id"}');
        ?>


  • ActionScript XML parsing failed

    - by sabuj
    I get this error:

        TypeError: Error #1088: The markup in the document following the root element must be well-formed.

    The ActionScript reads XML from a CodeIgniter controller function, but it throws the above error. What should be done? Below is my full code:

        import fl.transitions.easing.*;
        import fl.transitions.*;

        var xml:XML;
        var xmlList:XMLList;
        var xmlLoader:URLLoader = new URLLoader();
        var productLoader:URLLoader = new URLLoader();
        var imageLoader:Loader;
        var bigImage:Loader = new Loader();
        var imageText:TextField = new TextField();

        xmlLoader.load(new URLRequest("http://outshinebd.com/shopmajik/flash/shopdata"));
        xmlLoader.addEventListener(Event.COMPLETE, xmlLoaded);

        function xmlLoaded(e:Event):void {
            xml = new XML(e.target.data);
            xmlList = xml.children();
            imageLoader = new Loader();
            imageLoader.load(new URLRequest(xmlList.elements("shopbanner").attribute("src")));
            imageLoader.x = -220;
            imageLoader.y = -187;
            addChild(imageLoader);
        }

        imgLoader.alpha = 0;
        textLoader.alpha = 0;
        imgLoader2.alpha = 0;
        imgLoader3.alpha = 0;
        imgLoader4.alpha = 0;
        imgLoader5.alpha = 0;

        // Product button is working here...
        btnProduct.addEventListener(MouseEvent.CLICK, showProduct);
        function showProduct(event:Event):void {
            imgLoader.alpha = 0;
            textLoader.alpha = 0;
            imgLoader3.alpha = 0;
            imgLoader4.alpha = 0;
            imgLoader5.alpha = 0;
            imgLoader2.alpha = 0;
            var productLoader:URLLoader = new URLLoader();
            productLoader.load(new URLRequest("http://outshinebd.com/shopmajik/flash/productdata"));
            productLoader.addEventListener(Event.COMPLETE, onProductLoad);

            function onProductLoad(e:Event):void {
                var productLoader:XML = new XML(e.target.data);
                xmlList = productLoader.children();
                textLoader.text = xmlList.elements("productname");
                textLoader.alpha = 0;
            }
        }

        // Item button is working here...
        btnItem.addEventListener(MouseEvent.CLICK, showItem);
        //btnItem.addEventListener(Event:event, showItem);
        function showItem(event:Event):void {
            imgLoader.alpha = 0;
            textLoader.alpha = 0;
            imgLoader3.alpha = 0;
            imgLoader4.alpha = 0;
            imgLoader5.alpha = 0;
            imgLoader2.alpha = 0;
            var itemLoader:URLLoader = new URLLoader();
            itemLoader.load(new URLRequest("http://outshinebd.com/shopmajik/flash/productdata"));
            itemLoader.addEventListener(Event.COMPLETE, onItemLoad);

            function onItemLoad(e:Event):void {
                var myLoader:XML = new XML(e.target.data);
                xmlList = myLoader.children();
                textLoader.text = xmlList.elements("productname");
                textLoader.alpha = 0;
            }
        }

        // Details button workings...
        btnDetails.addEventListener(MouseEvent.CLICK, showDetails);
        function showDetails(event:Event):void {
            imgLoader.alpha = 0;
            textLoader.alpha = 0;
            imgLoader3.alpha = 0;
            imgLoader4.alpha = 0;
            imgLoader5.alpha = 0;
            imgLoader2.alpha = 0;
            var detailsLoader:URLLoader = new URLLoader();
            detailsLoader.load(new URLRequest("http://outshinebd.com/shopmajik/flash/productdata"));
            detailsLoader.addEventListener(Event.COMPLETE, onDetailsLoad);

            function onDetailsLoad(e:Event):void {
                var myLoader:XML = new XML(e.target.data);
                xmlList = myLoader.children();
                textLoader.text = xmlList.elements("productdescription");
                textLoader.alpha = 0;
            }
        }

        btnImages.addEventListener(MouseEvent.CLICK, showImages);
        function showImages(event:Event):void {
            textLoader.alpha = 0;
            textLoader.text = "";
            imgLoader2.alpha = 1;
            var imagesLoader:URLLoader = new URLLoader();
            imagesLoader.load(new URLRequest("http://outshinebd.com/shopmajik/flash/productdata"));
            imagesLoader.addEventListener(Event.COMPLETE, onImagesLoad);

            function onImagesLoad(e:Event):void {
                var xml:XML = new XML(e.target.data);
                xmlList = xml.children();
                imgLoader2.x = -155;
                imgLoader2.y = -50;
                imgLoader2.load(new URLRequest(xmlList.elements("productimage").attribute("src")));
            }
        }

        btnPay.addEventListener(MouseEvent.CLICK, showPage);
        function showPage(e:MouseEvent):void {
            navigateToURL(new URLRequest("https://www.paypal.com/"));
        }
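
    For reference, Error #1088 almost always means the loaded text is not a single well-formed XML document: typically the CodeIgniter controller is emitting something around the XML (HTML error output, PHP notices, leading whitespace, or a second root element). A hedged first step is to inspect the raw response before parsing:

        function xmlLoaded(e:Event):void {
            trace(URLLoader(e.target).data); // inspect the raw server response
            xml = new XML(e.target.data);    // #1088 here means the text above
                                             // is not pure, single-rooted XML
        }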


  • Adobe Photoshop Vs Lightroom Vs Aperture

    - by Aditi
    Adobe Photoshop is the standard choice for photographers, graphic artists, and web designers. Adobe Photoshop Lightroom and Apple's Aperture are in the same league, but their usage is vastly different. Although Photoshop is the most popular tool and widely used by photographers, in many ways it is less relevant to photographers than ever before, as Lightroom and Aperture are aimed squarely at photographers and photo processing. With this write-up we are going to help you choose what is right for you, and why.

    Adobe Photoshop

    Adobe Photoshop is the best-liked tool for detailed photo editing and design work. Photoshop provides great features for rollovers and image slicing, and includes comprehensive optimization features for producing the highest-quality web graphics with the smallest possible file sizes. You can also create striking animations with it. Designers and editors know how important precise masking is; Photoshop lets you do that with various detailing tools. The art history brush, contact sheets, and history palette are some of the smart features which add to its viability. Whether you're producing printed pages or moving images, you can work more efficiently and produce better results because of its smooth integration with other Adobe applications. By supporting layer effects, it allows you to quickly add drop shadows, inner and outer glows, bevels, and embossing to layers. It also provides a seamless web graphics workflow. Photoshop is hands-down the best for editing.

    Photoshop cons:

    • Slower, less precise editing features in Bridge
    • Processing lots of images requires actions and can be slower than exporting images from Lightroom
    • Much slower when editing and processing a large number of images

    Aperture

    Apple Aperture is aimed at the professional photographer who shoots predominantly raw files. It helps them manage their workflow and perform the initial raw conversion in a better way. Aperture provides adjustment tools, such as the histogram, to modify color and white balance, but most photo editing is left to Photoshop. It gives users the option of seeing their photographs laid out like slides or negatives on a light table. It boasts stars, color-coding, and easy techniques for filtering and picking images. Aperture goes a few steps beyond Photoshop for this workflow, but most of the editing work is still left to Photoshop, with which it integrates seamlessly.

    Aperture pros:

    • Aperture is a step up from the iPhoto software that comes with every Mac, and fairly easy to learn.
    • Adjustments are made in a logical order from top to bottom of the menu.
    • You can store the images in a library or any folder you choose.
    • Aperture also works really well with Canon raw files directly.
    • It is just $79 if you buy it through Apple's App Store.
    • Moving forward, it will run on the iPad, and possibly the iPhone; Adobe products like Lightroom and Photoshop may never offer these options.
    • It has a much nicer and simpler user interface.

    Lightroom

    Lightroom does a smashing job of basic fixing and editing. It is the more advanced tool for photographers, who can use it to achieve striking photographic effects. Lightroom has many advanced features, which make it one of the best tools for photographers and put it far ahead of the other two:

    • Nondestructive editing: nothing is actually changed in an image until the photo is exported.
    • Better controls for organizing your photos; Lightroom helps gather a group of photos to use in a slideshow.
    • Lightroom has larger Compare and Survey views of images.
    • A quickly customizable interface: simple keystrokes perform different tasks.
    • All Lightroom controls are kept available in panels right next to the photos.
    • An always-available History palette; it doesn't go away when you close Lightroom.
    • You gain more colors to work with compared to Photoshop, and with more precise control.
    • Local control, or adjusting small parts of a photo without affecting anything else, has long been an important part of photography. In Lightroom 2, you can darken, lighten, and affect the color and sharpness of specific areas simply by brushing your cursor across them.
    • Photoshop has far more power in its Cloning and Healing Brush tools than Lightroom, but Lightroom offers simple cloning and healing that is nondestructive.
    • Lightroom supports the RAW formats of more cameras than Aperture.
    • Lightroom provides the option of storing images outside the application, in the file system.
    • It costs less than Photoshop.

    Why is Photoshop more advanced than Lightroom? There are countless image-processing plug-ins on the market for doing specialized processing in Photoshop. For example, if your image needs sophisticated noise reduction, you can use the Noiseware plug-in with Photoshop to do a much better job of noise removal than Lightroom can.

    Lightroom's advantages over Aperture 3:

    • It will always have better integration with Photoshop.
    • Lightroom is backed by a bigger and more active user community (so tutorials and the like are abundant).
    • A better noise-reduction tool.
    • The lens-distortion correction tool is perfect, especially for photographers.

    Lightroom cons:

    • You have to import images to work on them.
    • It slows down with over 10,000 images in the catalog.
    • For processing just one or two images, this is a slower workflow.

    Photoshop pros:

    • ACR has the same RAW processing controls as Lightroom.
    • The ACR histogram is specialized to the chosen color space (Lightroom is locked into the ProPhoto RGB color space with an sRGB tone curve).
    • You don't have to import images to open them in Bridge or ACR.
    • You can customize the processing of RAW images with Photoshop actions.

    Pricing and Availability

    The latest version of Photoshop can be purchased from the Adobe store and Adobe authorized resellers, and it costs US$999. The latest version of Aperture can be bought for US$199 from the Apple online store or the Mac App Store. You can buy the latest version of Lightroom from the Adobe store or an Adobe authorized reseller for US$299.

    Read the article

  • Boot From a USB Drive Even if your BIOS Won’t Let You

    - by Trevor Bekolay
    You’ve always got a trusty bootable USB flash drive with you to solve computer problems, but what if a PC’s BIOS won’t let you boot from USB? We’ll show you how to make a CD or floppy disk that will let you boot from your USB drive. Many boot menus, like those created before USB drives became cheap and commonplace, do not include an option to boot from a USB drive. A piece of freeware called PLoP Boot Manager solves this problem, offering an image that can be burned to a CD or put on a floppy disk, enabling you to boot to a variety of devices, including USB drives.
    Put PLoP on a CD
    PLoP comes as a zip file, which includes a variety of files. To put PLoP on a CD, you will need either plpbt.iso or plpbtnoemul.iso from that zip file. Either disc image should work on most computers, though if in doubt, plpbtnoemul.iso should work “everywhere,” according to the readme included with PLoP Boot Manager. Burn plpbtnoemul.iso or plpbt.iso to a CD and then skip to the “Booting PLoP Boot Manager” section.
    Put PLoP on a Floppy Disk
    If your computer is old enough to still have a floppy drive, you will need to put the contents of the plpbt.img image file found in PLoP’s zip file on a floppy disk. To do this, we’ll use a freeware utility called RawWrite for Windows. We aren’t fortunate enough to have a floppy drive installed, but if you do, it should be listed in the Floppy drive drop-down box. Select your floppy drive, then click the “…” button and browse to plpbt.img. Press the Write button to write PLoP Boot Manager to your floppy disk.
    Booting PLoP Boot Manager
    To boot PLoP, your CD or floppy drive must boot with higher precedence than your hard drive. In many cases, especially with floppy disks, this is the default. If the CD or floppy drive is not set to boot first, you will need to access your BIOS’s boot menu or setup menu. The exact steps vary depending on your BIOS – for a detailed description of the process, search for your motherboard’s manual (or your laptop’s manual if you’re working with a laptop). In general, however, as the computer boots up, the important keystrokes are noted somewhere prominent on the screen; in our case, at the bottom, where pressing Escape brings up the Boot Menu. Since we burned a CD with PLoP Boot Manager on it, we select the CD-ROM Drive option and hit Enter. If your BIOS does not have a boot menu, you will need to access the Setup menu and change the boot order to give the floppy disk or CD-ROM drive higher precedence than the hard drive. This setting is usually found in the “Boot” or “Advanced” section of the Setup menu. If done correctly, PLoP Boot Manager will load up, offering a number of boot options. Highlight USB and press Enter. PLoP begins loading from the USB drive. Despite our BIOS not having the option, we’re now booting from the USB drive, which in our case holds an Ubuntu Live CD! This is a pretty geeky way to get your PC to boot from USB… provided your computer still has a floppy or CD drive. Of course, if your BIOS won’t boot from USB, it probably has one… or you really need to update it.
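    If you are preparing the PLoP media from a Linux machine instead, the same two images can be written from the command line. A minimal sketch, assuming the PLoP zip has already been downloaded; the archive name/version and both device paths (/dev/sr0 for the burner, /dev/fd0 for the floppy) are assumptions – check yours first:

        # Unpack the PLoP download (the version number here is an assumption)
        unzip plpbt-5.0.14.zip && cd plpbt-5.0.14

        # Burn the boot-manager ISO to a blank CD (list burners with: wodim --devices)
        wodim -v dev=/dev/sr0 plpbt.iso

        # ...or write the floppy image raw to a floppy disk, the dd equivalent of RawWrite
        dd if=plpbt.img of=/dev/fd0 bs=512 && sync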
    Download PLoP Boot Manager | Download RawWrite for Windows

    Read the article

  • Oracle’s New Memory-Optimized x86 Servers: Getting the Most Out of Oracle Database In-Memory

    - by Josh Rosen, x86 Product Manager-Oracle
    With the launch of Oracle Database In-Memory, it is now possible to perform real-time analytics operations on your business data as it exists at that moment – in the DRAM of the server – and immediately return completely current and consistent data. The Oracle Database In-Memory option dramatically accelerates the performance of analytics queries by storing data in a highly optimized columnar in-memory format. This is a truly exciting advance in database technology. As Larry Ellison mentioned in his recent webcast about Oracle Database In-Memory, queries run 100 times faster simply by throwing a switch. But in order to get the most from the Oracle Database In-Memory option, the underlying server must also be memory-optimized. This week Oracle announced new 4-socket and 8-socket x86 servers, the Sun Server X4-4 and Sun Server X4-8, both of which have been designed specifically for Oracle Database In-Memory. These new servers use the fastest Intel® Xeon® E7 v2 processors, and each subsystem has been designed to be the best for Oracle Database, from the memory, I/O, and flash technologies right down to the system firmware.
    Among these subsystems, one of the most important aspects we have optimized in the Sun Server X4-4 and Sun Server X4-8 is the memory subsystem. The new In-Memory option makes it possible to select which parts of the database should be memory-optimized. You can choose to put a single column or table in memory or, if you can, put the whole database in memory. The more, the better. With 3 TB and 6 TB of total memory capacity on the Sun Server X4-4 and Sun Server X4-8, respectively, you can memory-optimize more of your database, if not all of it.
    [Image: Sun Server X4-8 CMOD with 24 DIMM slots per socket (up to 192 DIMM slots per server)]
    But memory capacity is not the only important factor in selecting the best server platform for Oracle Database In-Memory. As you put more of your database in memory, a critical performance metric known as memory bandwidth comes into play. The total memory bandwidth of the server dictates the rate at which data can be stored in and retrieved from memory. To achieve real-time analysis of your data using Oracle Database In-Memory, even under heavy load, the server must be able to handle extreme memory workloads. With that in mind, the Sun Server X4-8 was designed with the maximum possible memory bandwidth, providing over a terabyte per second of total memory bandwidth. Likewise, the Sun Server X4-4 provides extreme memory bandwidth in an even more compact form factor – over half a terabyte per second – giving customers scalability and choice depending on the size of the database.
    Beyond the memory subsystem, Oracle’s Sun Server X4-4 and Sun Server X4-8 systems provide other key technologies that enable Oracle Database to run at its best. The Sun Server X4-4 allows for up to 4.8 TB of internal, write-optimized PCIe flash, while the Sun Server X4-8 allows for up to 6.4 TB of PCIe flash. This enables dramatic acceleration of data inserts and updates to Oracle Database. And with the new elastic computing capability of Oracle’s new x86 servers, server performance can be adapted to your specific Oracle Database workload to ensure that every last bit of processing power is utilized. Because Oracle designs and tests its x86 servers specifically for Oracle workloads, we provide the highest possible performance and reliability when running Oracle Database.
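    For readers curious what “throwing the switch” looks like in practice, here is a minimal sketch run from a shell on the database host with SYSDBA access. The column-store size, the table name (sales), and the excluded column (prod_notes) are all made-up examples, not values from the article:

        # Hedged sketch: reserve an in-memory column store, populate one table,
        # and exclude a column never used for analytics. Values are illustrative.
        printf '%s\n' \
          "ALTER SYSTEM SET INMEMORY_SIZE = 100G SCOPE = SPFILE;" \
          "ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;" \
          "ALTER TABLE sales INMEMORY NO INMEMORY (prod_notes);" \
          | sqlplus -s / as sysdba

    Note that the INMEMORY_SIZE change only takes effect after an instance restart; the table-level statements take effect immediately.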
    To learn more about Sun Server X4-4 and Sun Server X4-8, you can find more details, including data sheets and white papers, here. Josh Rosen is a Principal Product Manager for Oracle’s x86 servers, focusing on Oracle’s operating systems and software. He previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle’s hardware products, ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Own an iPad, iPhone, or Touch? Add GWB to your Home

    - by Staff of Geeks
    I know it is goofy to geek out about such things, but I am a pretty goofy geek.  Today we published support for the Apple touch icons used by devices like the iPhone, iPad, and iPod Touch.  Now when you add Geekswithblogs.net to your home screen (via the little + in Safari) it will load our new icon instead of a screenshot of the site.  We have been using my iPhone and iPad to surf to GWB for a while, and we were tired of that screenshot.  A simple Google/Bing search and a little Photoshop CS5 work (amazing!!!), and there it is.  The next step we are taking in the Apple iPad/iPhone arena is getting our developer kit and seeing what possibilities there are for Geekswithblogs.net and a custom app.  Got any ideas? Here is a shot of my iPad 3G: Also, would anyone be interested in a Geekswithblogs.net custom background for the iPhone or iPad?  I can put something together real quick if there are any takers.
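    (For anyone wiring this up on their own site: Safari looks for an “apple-touch-icon” – either a PNG at the site root or one declared via a link tag in the page head. A quick way to check what a site serves is sketched below; the exact icon path GWB publishes is an assumption on our part:)

        # Ask for the touch icon at the conventional root path; a 200 status line
        # means the site serves one there (it may instead declare it via a <link> tag)
        curl -sI http://geekswithblogs.net/apple-touch-icon.png | head -n 1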

    Read the article

  • Booting the liveCD/USB in EFI mode fails on Samsung Tablet XE700T1A

    - by F.L.
    My tablet is a Samsung Series 7 Slate (XE700T1A-A02FR, French model) built on Intel’s Sandy Bridge architecture. The main issue with this tablet is that it ships with Windows 7 installed in (U)EFI mode (GPT partition table, etc.), so I’d like to set up an EFI dual boot with Ubuntu. But it seems I can’t boot the liveCD in EFI mode: it starts loading (up to the initrd), but then I get a blank (black) screen. I’ve tried the nomodeset kernel option (as well as removing quiet and splash) with no luck.
    [2012-09-27] I have used the Ubuntu 12.04.1 Desktop ISO (I have read somewhere that it is the only one that can boot in EFI mode). I’d say this has something to do with UEFI, since the liveCD boots in BIOS mode but not in EFI mode. Besides, I am not sure my boot info will help, since I can’t boot the liveCD in EFI mode and as a result can’t install Ubuntu in EFI mode – it would be the boot info from the liveCD booted in BIOS mode. This happens with an ubuntu-12.04.1-desktop-amd64 ISO used on a live USB. The live USB was created by dd’ing the ISO onto the full disk device (i.e. /dev/sdx, no number) of the flash drive. I have also tried copying the liveCD files onto a primary GPT partition, but with no luck – I just get the GRUB shell: no menu, no install option.
    [2012-09-28] Today I tried a flash drive created with Ubuntu’s Startup Disk Creator and the alternate 12.04.1 64-bit ISO. I get a GRUB menu in text mode (which means it did start in EFI mode) with install and test options. But when I start any of these, I simply get a black screen (no cursor – neither a mouse pointer nor a text-mode cursor). I tried removing the quiet option and adding nomodeset and acpi=off, but it didn’t do any good. So this is the same result as with the liveCD.
    [2012-10-01] I tried a version of the secure remix ISO written via usb-creator-gtk. Booting from the USB key shows the same symptoms: booting in EFI mode is impossible (I get a menu, but whatever entry I choose, I get the blank-screen problem). Booting in BIOS mode works, and I did the install. Then I used boot-repair to try installing grub-efi and getting a system that would boot in EFI mode. But I can’t boot this system, because the EFI firmware doesn’t seem to detect that sda contains a valid EFI partition. Here is the resulting boot info: Boot info 1253554
    [2012-10-01] Today I reinstalled the pre-shipped version of Windows 7 and then installed Ubuntu from a secure-remix ISO dumped onto a USB flash drive via usb-creator-gtk, booted in BIOS mode. When the install ended, I chose “continue testing”, then used boot-repair to try to get the bootloader installed. Now when I boot the tablet I get the GRUB menu, and it can chainload Windows 7 flawlessly. But when I try to start one of the Ubuntu options, I get the same old blank screen. Here is the new boot info: Boot info 1253927
    [2012-10-01] I tried installing the 3.3 kernel by chrooting into the installed system from a live USB boot (secure remix again). Same symptoms. I feel the key to this is that the device’s EFI firmware (which is EFI v2.0) exposes the graphics hardware in a way that prevents the kernel from initializing it, and thus from booting (the kernel stops all drive access just after the screen turns a very dark purple). Here is some info on the UEFI firmware as given by rEFInd:
    EFI revision: 2.00
    Platform: x86_64 (64 bit)
    Firmware: American Megatrends 4.635
    Screen Output: Graphics Output (UEFI), 800x600
    [2012-10-08] This weekend I tried loading the kernel with elilo. Even though I didn’t have any more luck booting the kernel, elilo gives more information while loading it. I think the next step is to try loading a kernel with an EFI stub directly.

    Read the article

< Previous Page | 150 151 152 153 154 155 156 157 158 159 160 161  | Next Page >