Search Results

Search found 4376 results on 176 pages for 'palm pre'.


  • Improve Playback Using Enhancements in Windows Media Player 12

    - by DigitalGeekery
    Are you looking for ways to improve the playback of your media in Windows Media Player 12? We’ll show you how to do that by using the enhancements in WMP 12. If you are in Library mode, you’ll need to click the icon at the lower right to switch to Now Playing mode. Right-click anywhere in Media Player while in Now Playing mode, select Enhancements, and select any of the available options. You can switch between the individual enhancements by clicking the right and left buttons at the top left.
    Crossfading and Auto Volume Leveling: The Auto Volume Leveling setting is a simple on/off toggle that works if your MP3 or WMA files have volume leveling information values. You can automatically add volume leveling information values to all files you add to your library by switching to Library view, going to Tools > Options, and selecting Add volume leveling information values for new files on the Library tab. Click OK when finished. Crossfading will gradually decrease the volume of the song that is ending (fade out) and increase the volume of the song that is beginning. Click Turn on Crossfading, then click and drag the slider left or right to change the amount of overlap between tracks.
    Graphic Equalizer: The graphic equalizer is toggled on and off by clicking Turn on / Turn off at the top left. You can select pre-defined equalizer settings by music genre by clicking the Default list. The radio buttons on the left allow you to move the sliders individually, in a loose group, or in a tight group. You can always return to the default settings by clicking Reset.
    Play Speed Settings: Choose a pre-defined setting by clicking Slow, Normal, or Fast. Uncheck Snap slider to common speeds, then move the slider right or left to reach your desired speed. If nothing else, these settings provide a little fun and amusement.
    Quiet Mode: Quiet Mode will level out any sharp volume highs and lows within a single track. Simply toggle the setting on or off and select whether you prefer Medium difference or Little difference by selecting one of the radio buttons.
    SRS WOW effects: SRS WOW effects enhance low-frequency and stereo sound performance. Click Turn on to enable the TruBass and WOW Effect sliders. You can also optimize for your speaker type: click to switch between Regular, Large, and Headphones.
    Video Settings: Video Settings allow you to adjust the Hue, Brightness, Saturation, and Contrast. You can also adjust the zoom settings by clicking Select video zoom settings.
    Dolby Digital Settings: Choose between Normal, Night, and Theater settings to adjust the audio for Dolby Digital content. This setting will only affect media with Dolby Digital sound.
    Looking for more ways to improve your media experience in WMP 12? Check out how to update metadata and cover art and how to share media with other Windows 7 computers on your home network.

    Read the article

  • Hello From South Florida

    - by Sam Abraham
    Fellow Blog Readers: I figured I’d use my first blog post on GeeksWithBlogs to introduce myself. I recently relocated from Long Island, NY to South Florida, where I joined a local company as a Software Engineer specializing in technologies such as C#, ASP.Net 3.5, WCF, Silverlight, SQL Server 2008 and LINQ, to name a few. I am an MCP and MCTS in ASP.Net 3.5, looking to get my .Net 4.0 certification soon. Having been in the industry for a few years so far, I figured I would share with you my take on the importance of being involved in (or at least attending) local user groups. I am a firm believer that besides using a certain technology, the best way to expand one’s knowledge is by sharing it with others and being equally open to learn from others just as much as you are willing to share what you know. In my opinion, an important factor that makes a good developer stand out is his/her ability to keep abreast of the latest and greatest, even in areas outside his/her direct expertise. Additionally, having spoken to various recruiters, technical user group attendees are always looked upon favorably as genuinely interested in their field and willing to take the initiative to expand their knowledge, which offers job candidates good leverage when competing for jobs. I believe I am very blessed to be in an area with a very strong and vibrant developer community. I found in the local .Net community leadership a genuine interest in constantly extending the opportunity to all developers to get more involved, and in encouraging those who are willing to take that initiative to achieve their goal: speak at meetings, volunteer at events, or write and publish articles/blogs about the latest and greatest technologies. With Vishal Shukla (Site Director for the West Palm Beach .Net User Group) traveling overseas, I have been extended the opportunity to come on board as site coordinator for FladotNet's WPB .Net User Group along with Venkata Subramanian, an opportunity which I gratefully accepted. Being involved in running a .Net User Group will surely help me personally and professionally, but my real hope is to use this opportunity to assist in delivering the ultimate common goal: spread the word about new .Net technologies, help everybody get more involved and simply have fun learning new things. With my introduction out of the way, in the next few days I will be posting some notes on an upcoming talk I will be giving about MVC2 and VS2010 in mid-April. Environment.Exit(0); --Sam

    Read the article

  • Best of OTN - Week of Oct 21st

    - by CassandraClark-OTN
    This week's Best of OTN, for you, the best devs, DBAs, sysadmins and architects out there! In these weekly posts the OTN team will highlight the top content from each community: Architect, Database, Systems and Java. Since we'll be publishing this on Fridays, we'll also mix in a little fun!
    Architect Community Top Content - The Road Ahead for WebLogic 12c | Edwin Biemond: Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 & 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2. A Roadmap for SOA Development and Delivery | Mark Nelson: Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way. Updated ODI Statement of Direction | Robert Schweighardt: Heads up, Oracle Data Integrator fans! A new statement of product direction document is available, offering an overview of the strategic product plans for Oracle’s data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB). Bob Rhubart, Architect Community Manager. Friday Funny - "Some people approach every problem with an open mouth." — Adlai E. Stevenson (October 23, 1835 – June 14, 1914), 23rd Vice President of the United States
    Database Community Top Content - Pre-Built Developer VMs (for Oracle VM VirtualBox): Heard all the chatter about Oracle VirtualBox? Over 1 million downloads per week and look: pre-built virtual appliances designed specifically for developers. Video: Big Data, or BIG DATA? Oracle ACE Director Ben Prusinski explains the differences. Webcast Series - Developing Applications in Oracle's Public Cloud: Time to get started on developing and deploying cloud applications by moving to the cloud. Good friend Gene Eun from Oracle's Cloud team posted this two-part Webcast series that has an overview and demonstration of the Oracle Database Cloud Service. Check out the demos on how to migrate your data to the cloud, extend your application with interactive reporting, and create and access RESTful Web services. Registration required, but so worth it! Laura Ramsey, Database Community Manager. Friday Funny -
    Systems Community Top Content - Video: What Kind of Scalability is Better, Horizontal or Vertical? Rick Ramsey asks the question "Is Oracle's approach to large vertically scaled servers at odds with today's trend of combining lots and lots of small, low-cost servers systems with networking to build a cloud, or is it a better approach?" Michael Palmeter, Director of Solaris Product Management, and Renato Ribeiro, Director of Product Management for SPARC Servers, discuss. Video: An Engineer Takes a Minute to Explain Cloud: Bart Smaalders, long-time Oracle Solaris core engineer, takes a minute to explain cloud from a sysadmin point of view. Hands-On Lab: How to Deploy and Manage a Private IaaS Cloud: Soup to nuts. This lab shows you how to set up and manage a private cloud with Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user.
    Rick Ramsey, Systems Community Manager. Friday Funny - Video: Drunk Airline Pilot - Dean Martin - Foster Brooks
    Java Community Top Content - Video: NightHacking Interview with James Gosling: James Gosling, the Father of Java, discusses robotics, Java and how to keep his autonomous WaveGliders in the ocean for weeks at a time. Live from Hawaii. Video: Raspberry Pi Developer Challenge: Remote Controller: A developer who knew nothing about Java Embedded or Raspberry Pi shows how he can now control a robot with his phone. The project was built during the Java Embedded Challenge for Raspberry Pi at JavaOne 2013. Java EE 7 Certification Survey - Participants Needed: Help us define how to serve your training and certification needs for Java EE 7. Tori Wieldt, Java Community Manager. Friday Funny - Programmers have a strong sensitivity to Yak's pheromone. Causes irresistible desire to shave said Yak. Thanks, @rickasaurus!
    To follow and take part in the conversation, follow/like etc. at one or all of the resources below: OTN TechBlog, The Java Source Blog, The OTN Garage Blog, The OTN ArchBeat Blog, @oracletechnet, @java, @OTN_Garage, @OTNArchBeat, @OracleDBDev, OTN, I Love Java, OTN Garage, OTN ArchBeat, Oracle DB Dev, OTN Java

    Read the article

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.
    In one of my Mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.
    What works: The syntax that seems to work fairly consistently except for iOS is this:
    sms:phonenumber?body=message
    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):
    sms:phonenumber;body=message
    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android Emulator. All four devices managed to pop up the SMS with the data prefilled.
    You can use this in a link:
    <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>
    or you can set it on the window.location in JavaScript:
    window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");
    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links automatically encode, but when you assign the URL directly in code the text value should be encoded.
    Body only: I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:
    sms:?body=message
    For iOS, same thing except with the ;:
    sms:;body=message
    Here's an example of the code I use to set up the SMS:
    var ua = navigator.userAgent.toLowerCase();
    var url;
    if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
        url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
    else
        url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
    location.href = url;
    and that also works for all the devices mentioned above.
    It's pretty cool that URL schemes exist to access device functionality and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...
    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in: iOS, JavaScript, HTML5

    Read the article

  • Making the most of next week's SharePoint 2010 developer training

    - by Eric Nelson
    [you can still register if you are free on the afternoons of 9th to 11th – UK time] We have 50+ registrations with more coming in – which is fantastic. Please read on to make the most of the training.
    Background: We have structured the training to make sure that you can still learn lots during the three days even if you do not have SharePoint 2010 installed. Additionally, the course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Which means if you have zero time between now and next Wednesday then you are still good to go. But if you can do some pre-work you will likely get even more out of the three days.
    Step 1: Check out the topics and resources available on-demand. The course is based around a subset of the Channel 9 training to allow you to easily dig deeper or look again at specific areas. Take a lap around the SharePoint 2010 Training Course on Channel 9. Download the SharePoint Developer Training Kit.
    Step 2: Use a pre-configured Virtual Machine which you can download (best start today – it is large!). If you don't have access to SharePoint 2010, consider using the VM we created. You will need a 64-bit host OS and a bare minimum of 4GB of RAM; 8GB recommended. Virtual PC cannot be used with this VM – Virtual PC only supports 32-bit guests. The 2010-7a Information Worker VM gives you everything you need to develop for SharePoint 2010. Watch the video on how to use this VM. Download the VM. Remember you only need to download the “parts” for the 2010-7a VM. There are 3 subtly different ways of using this VM: Easiest is to follow the advice of the video and get yourself a host OS of Windows Server 2008 R2 with Hyper-V and simply use the VM. Alternatively you can take the VHD and create a “Boot to VHD” if you have Windows 7 Ultimate or Enterprise Edition. This works really well – especially if you are already familiar with “Boot to VHD” (this post I did will help you get started). Or you can take the VHD and use an alternative VM tool such as VirtualBox if you have a different host OS. NB: This tends to involve some work to get everything running fine. Check out parts 1 to 3 from Rolly and if you go with VirtualBox use an IDE controller, not SATA. SATA will blue screen. Note in the screenshot below I also converted the VHD to a VMDK. I used the free StarWind Converter to do this whilst I was fighting blue screens – not sure it's necessary as VirtualBox does now work with VHDs.
    Step 3: Install SharePoint 2010 on a 64-bit Windows 7 or Vista host. I haven’t tried this but it is now supported. Check out MSDN.
    Final notes: I am in the process of securing a number of hosted VMs for ISVs directly managed by my team. Your Architect Evangelist will have details once I have them! Otherwise we can sort it out on the Wednesday. Regrettably I am unable to give folks 1:1 support on any issues around Boot to VHD, 3rd party VM products etc.
    Related Links: Check you are fully plugged into the work of my team – have you done these simple steps, including joining our new LinkedIn group?

    Read the article

  • Exalytics and Oracle Business Intelligence Enterprise Edition (OBIEE) Partner Workshop

    - by mseika
    Workshop Description: Oracle Fusion Middleware 11g is the #1 application infrastructure foundation. It enables enterprises to create and run agile and intelligent business applications and maximize IT efficiency by exploiting modern hardware and software architectures. Oracle Exalytics Business Intelligence Machine is the world’s first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers unmatched speed, visualizations and scalability for Business Intelligence and Enterprise Performance Management applications. This FREE hands-on partner workshop highlights both the hardware and software components that are engineered to work together to deliver Oracle Exalytics - an optimized version of the industry-leading Oracle TimesTen In-Memory Database with analytic extensions, a highly scalable Oracle server designed specifically for in-memory business intelligence, and Oracle’s proven Business Intelligence Foundation with enhanced visualization capabilities and performance optimizations. This workshop will provide hands-on experience with Oracle's latest engineered system. Topics covered will include TimesTen In-Memory Database and the new Summary Advisor for Exalytics, the technical details (including mobile features) of the latest release of visualization enhancements for OBIEE, and technical updates on Essbase. After taking this course, you will be well prepared to architect, build, demo, and implement an end-to-end Exalytics solution. You will also be able to extend your current analytical and enterprise performance management application implementations with numerous Oracle technologies specifically enhanced to take advantage of the compute capacity and in-memory capabilities of Oracle Exalytics. If you are a BI or Data Warehouse Architect, developer or consultant, you don’t want to miss this 3-day workshop. Register Now!
    Presentations: Exalytics Architectural Overview; Upgrade and Lifecycle Management; TimesTen for Exalytics; Summary Advisor Utility; Essbase and EPM System on Exalytics; Dashboard and Analysis Interactions; OBIEE 11.1.1.6 Features and Advanced Topics.
    Lab Outline: The labs showcase Oracle Exalytics core components and functionality and provide expertise on Oracle Business Intelligence 11.1.1.6 new features and updates from prior releases. The hands-on activities are based on an Oracle VirtualBox image with software and training samples pre-installed. Labs: Lab Environment Setup; Creating and Working with Oracle TimesTen In-Memory Database; Running Summary Advisor Utility; Working with Exalytics Visualization Features – Dashboard and Analysis Interactions.
    Audience: Oracle Partners; BI and EPM Application Developers and Implementers; System Integrators and Solution Consultants; Data Warehouse Developers; Enterprise Architects.
    Prerequisites: Experience and understanding of OBIEE 11g is required. Previous attendance of the Oracle Business Intelligence Foundation Suite Workshop or BIEE 11g Introduction Workshop is highly recommended. Good understanding of data warehousing and data modeling for reporting and analysis purposes. Strong experience with database technologies preferred.
    Equipment Requirements: This workshop requires attendees to provide their own laptops for this class. Attendee laptops must meet the following minimum hardware/software requirements. Hardware: minimum 8GB RAM; 60 GB free space (includes staging); USB 2.0 port (at least one available). It is strongly recommended that you bring a mouse; you will be working in a development environment and using the mouse heavily. Software: one of the following operating systems - 64-bit Windows host/laptop OS, or 64-bit host/laptop OS with a Windows VM (XP, Server, or Win 7, BIC2g, etc.); Internet Explorer 7.x/8.x or Firefox 3.5.x; WinRAR or 7zip utility to unzip workshop files (downloadable from http://www.win-rar.com/download.html or http://www.7zip.com/); Oracle VirtualBox 4.0.2 or higher (downloadable from http://www.virtualbox.org/wiki/Downloads). CPU virtualization mode needs to be enabled; we will provide guidance on the day of the workshop. Attendees will be given a VirtualBox image containing a pre-installed Oracle Exalytics environment.
    Schedule: This workshop is 3 days. Times vary by country! 9:00am: Sign-in and technical setup. 9:30am: Workshop starts. 5:00pm: Workshop ends. Oracle Exalytics and Business Intelligence (OBIEE) Workshop: December 11-13, 2012, Oracle BVP, Birmingham, UK. Register Here. Questions? Send email to: [email protected]. Oracle Platform Technologies Enablement Services.

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>; it never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And specs say "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But specs say "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have it for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' ./Trash-xxx ? If one can read/write his own Trash, one can do the same for the whole drive, including other's trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permissions on NTFS mounting by editing /fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use /.Trash/xxx format... For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access it. All I can do is manually "purge" them (by deleting files on /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I can not even use trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: I use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have it pre-mounted, integrated in my fs, in mounts like /data , /windows/xp , /windows/vista , and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives.
So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?

    Read the article

  • PMI South Florida Job Fair 2010

    - by Sam Abraham
    The South Florida Chapter of the Project Management Institute is planning a Job Fair slated for September 2010. This year has seen a significant improvement in the job market, with many surveyed companies indicating their intention to add temporary or permanent staff to their workforce in the near future. The Job Fair Initiative fits well within the chapter's message and goal for this year: "Exercising Social Responsibility" - our responsibility as PMI volunteers at all levels towards our members and surrounding community. Our free-to-members Annual Job Fair will play an important role in connecting Recruiters, Exhibitors and Job Seekers, thereby helping hiring companies gain access to a large talent pool at an affordable cost (totally free in certain cases, details to be revealed once finalized) while giving job seekers centralized access to many reputable hiring companies in the South Florida area. My involvement in the 2010 Job Fair started with a good conversation I had with Bernie Saenz, President and CEO of the South Florida PMI Chapter, at a networking event a few months ago. I had approached him with a few ideas in line with his goal to serve the community and our members given today's difficult economic climate. Bernie indicated that the Project Manager for the 2010 Job Fair had just been appointed and invited me to participate in this important initiative as a member of her team. I simply couldn't resist and gladly accepted the invitation. I chose an initial role as Recruiter Relations Lead, which entails developing documentation and timelines for our project plan with regards to Recruiter Engagement, as well as reaching out to recruiting companies to meet target representation at the Job Fair. Being heavily involved in the local technical community has afforded me the privilege of coming in contact with many reputable technology recruiting companies. (As a matter of fact, I already have 2 very reputable IT recruiting firms interested in joining us at the fair.) The excitement for me, however, will be finding and reaching out to recruiters in areas of Project Management and Leadership that I might not have been exposed to before, including Finance, Healthcare and Marketing, to name a few. Keep an eye out in the upcoming few weeks for official announcements on the PMI South Florida Job Fair 2010. Environment.Exit(0); -Sam Abraham, Site Director - West Palm Beach .Net User Group; Recruiter Relations Lead - PMI South Florida Job Fair 2010; Project Lead - Mentoring Programs - PMI South Florida

    Read the article

  • Oracle BI Applications for Industry Sectors

    - by Mike.Hallett(at)Oracle-BI&EPM
    Oracle BI Applications already provide pre-built line-of-business analytic applications to over 4,000 customers: these expose the data otherwise locked inside ERP and CRM applications, giving the business user the analytics they need, and a greater ability to self-service ad-hoc queries. Now you can also take advantage of the pre-built Oracle BI Applications approach for industry sector specific analytics to streamline your client’s operations, offer better services, and increase profit margins. Find out more at http://www.oracle.com/us/solutions/business-analytics/analytic-applications/industry/overview/index.html.
    Retail: Oracle Retail Merchandising Analytics, Oracle Retail Customer Analytics
    Education: Oracle Student Information Analytics
    Financial Services: Oracle Financial Analytics
    Public Sector: Oracle Tax Analytics
    Manufacturing: Oracle Manufacturing Analytics
    Health Care: Oracle Enterprise Healthcare Analytics, Oracle Clinical Development Analytics, Oracle Operating Room Analytics
    Asset Intensive: Oracle Enterprise Asset Management Analytics
    Health Sciences: Oracle Health Sciences Clinical Development Analytics, Oracle Argus Analytics
    Communication: Oracle Communications Data Model
    Related Links: Analytic Applications for Your Business Role, Analytic Applications for Your Product Line, Oracle Business Intelligence Tools and Technology, Oracle Exalytics In-Memory Machine
    "The adoption of Oracle Financial Services Analytic Applications is of great significance to the bank's transition to more rigorous and risk-averse management practices." — Yang Changxue, Project Manager

    Read the article

  • SQLAuthority News – Live Virtual Classroom New Trend in Technology

    - by nupurdave
    This blog post is by Nupur Dave, who is a housewife and works from home. Changing times and a super busy lifestyle have rendered most of us powerless when it comes to doing what we love to do. I feel that a man never ceases to learn and his sole aim is to seek knowledge, and keep growing. However, our tight schedules and packed calendars mean that we really have to struggle to take some time out and follow the path towards learning. Like all working professionals with a family to take care of, I hardly found time to pursue my interests. However, it was getting increasingly important for me to upgrade my skills, not only for my personal quest for knowledge but also to substantiate my professional standing. When I came to know about Koenig Live Virtual Classroom from friends, it piqued my interest. I felt like it was the answer to all my concerns. Without wasting a single minute, I contacted Koenig for a demo class. Here are some of the highlights of Koenig LVC which instantly struck a chord in me:
    Online Training – Koenig offers 1-on-1 Online Training with the instructor at the other end. It doesn’t matter where I am sitting, in my office or at home, I can connect to my trainer from anywhere.
    Flexible Timings – The most comfortable part is you get to choose the time that suits you best.
    Economical – No need to travel a thousand miles, the experts are right here on your computer screen. So no extra cost of travel, lodging and meals.
    24X7 Lab Access: This is again a great feature that proved to be very beneficial in gaining a practical understanding of the subject. Powered by a data center, this facility offers students much to look forward to.
    300+ Full Time Certified Experts: Be assured that you are learning from the best people in the industry.
    Customized Courses: Course material and training delivery is completely customized to suit your specific requirements.
    Official Courseware: The instructor teaches from official courseware of the vendor, depending on which course you have applied for – be it Microsoft, Cisco, Oracle or any other certification.
    Take Exam from Anywhere: Post completion of your IT training, you can take your certification exam from anywhere. Again, no need to travel a thousand miles to earn certified status.
    No Pre-Recorded Sessions: For those who still need clarification, it will be a live online classroom with trainers instructing you in real time. So you won’t get any surprise of pre-recorded sessions in place of your live instructor.
    Koenig’s Live Virtual Classroom methodology greatly exceeded my expectations. The instructor was highly skilled and very professional. I had concerns about the quality of AV on the computer screen, and whether I’d be able to understand each topic in detail. However, the quality of video and sound, and the learning methodology used, was impeccable. If you’re also facing a time crunch and other commitment issues which are getting in the way of your professional development, LVC is the best solution to learn and grow. To know more about Student Experiences and Feedback of Koenig LVC, you can view their Testimonials. Reference: Nupur Dave (http://blog.sqlauthority.com) Filed under: SQL Authority

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked turn based 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to set the question. What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily, bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast towards that direction.) I plan to balance all the game around this feature. The game would revolve around two phases: orders and combat phase. During the orders phase, you command the different ships. When all players finish the order phase, the combat phase begins and the ship orders get resolved in real-time for some time, then the action pauses and there's a new orders phase. The problem comes when I think about player input. To move a ship, you need to turn on or off different propellers if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at full power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring. The player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given) so it would be cool if the ships reacted as they move, adjusting the power of the propellers for their needs dynamically, but it may be too hard to implement and it's not really needed for the game to work. In both cases, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory to be achieved? I thought about some approaches:
    Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more uses, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it can be frustrating for the player (even if you can let it learn without playing.)
    Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.
    Pre-calculated trajectories: The same as above but not for each delta-time but the whole trajectory, which would then be fitted as much as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.
    Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.
    Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs constant propeller configuration throughout the entire combat phase.
    Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations. Given the current propeller's normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently so I didn't put in too much thought. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers which may act together to make a better propelling configuration. I'm really stuck here. Any ideas?
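    As a rough illustration of the "single brute forcing" option listed above, here is a minimal C# sketch. Every identifier in it (Ship, PropellerConfig, EnumerateConfigs, Integrate, the vector type) is a made-up placeholder for whatever the game's own physics layer provides, and the scoring is deliberately simplistic:
    // Hypothetical sketch: pick the propeller configuration whose short simulation
    // ends closest to the ordered destination. All types below are placeholders.
    PropellerConfig FindBestConfig(Ship ship, Vector3 targetPosition, int steps, float dt)
    {
        PropellerConfig best = null;
        float bestError = float.MaxValue;

        // EnumerateConfigs would yield, say, each propeller at 0%, 50% or 100% power.
        foreach (PropellerConfig config in EnumerateConfigs(ship.Propellers))
        {
            Ship sim = ship.Clone();              // simulate on a copy, never the real ship
            for (int i = 0; i < steps; i++)
                sim.Integrate(config, dt);        // apply this config's thrust/torque for one timestep

            // Score on position only here; an orientation-error term could be added the same way.
            float error = Vector3.Distance(sim.Position, targetPosition);
            if (error < bestError)
            {
                bestError = error;
                best = config;
            }
        }
        return best;
    }
    The configuration returned would then be held for the whole combat phase, which is exactly the limitation the question describes; re-running the search every few timesteps turns it back into the "continuous brute forcing" variant with its CPU cost.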

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked…..i.e. “when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?” Not surprisingly, the short answer is “it depends”!
    Let’s take a scenario where you are working on a Sales Force Automation project. We’ll call the process that is being implemented “Lead-to-Order”. I would typically think of this type of project as being “Process Centric”. In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms……implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined “best-practice” business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.
    Now let’s take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project “Process Centric”?.......well you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.
    At this point you might be asking “why couldn’t I use a Process Modeling approach for my Content Management project?” Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:
    Manage outbound calls – A series of linked human and system tasks for calling and following up with prospects
    Manage content revision – Updating the content on a website
    Update User Preferences – Updating a user's display preferences
    Assign Lead – Reviewing a lead, then assigning it to a sales person
    Convert Lead to Quote – Updating the status of a lead, and then converting it to a sales order
    As you can see, it’s not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, then that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering. Now let’s take one final scenario….. As you captured the To-Be Process flows for the Sales Force automation project, you discover a “Gap” in terms of what the client requires, and what the standard COTS application can provide.
Let’s assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach. As always, your comments are most welcome.

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 1)

    - by John Webb
    Rebekah Jackson joins our blog with a series of helpful hints on planning your upgrade to PeopleSoft 9.2.
    Find Features & Capabilities: There are many ways that you might learn about new features and capabilities within our releases, but if you aren’t sure where to start or how best to go about it, we recommend: Go to www.peoplesoftinfo.com. Select the product line you are interested in, and go to the ‘Release Content’ tab. Use the Video Feature Overviews (VFOs) on YouTube and the Cumulative Feature Overview (CFO) tool to find features and functions.
    The VFOs are brief recordings that summarize some of our most popular capabilities. These recordings are great tools for learning about new features, or helping others to visualize the value they can bring to your organization. The VFOs focus on some of our highest value and most compelling new capabilities. We also provide summarized ‘Why Upgrade to 9.2’ VFOs for HCM, Financials, and Supply Chain.
    The CFO is a spreadsheet based tool that allows you to select the release you are currently on, and compare it to the new release. It will return the list of all new features and capabilities, by product. You can browse the full list and / or highlight areas that look particularly interesting.
    Once you have a list of features by product, use the Release Value Proposition, Pre-Release Notes, and the Release Notes documents to get more details on, and supporting value statements about, why those features will be helpful.
    Gather additional data and supporting information, including: Go to the Product Data Sheets tab, and review the respective data sheets. These summarize the capabilities in the product, and provide succinct value statements for the product and capabilities. The PeopleSoft 9.2 Upgrade page, which has many helpful resources.
    Important Notes:
    - We recommend that you go through the above steps for the application areas of interest, as well as for PeopleTools. There are many areas in PeopleTools 8.53 and the 9.2 application releases that combine technical and functional capabilities to deliver transformative value.
    - We also recommend that you review the Portal Solutions content. With your license to PeopleSoft applications, you have access to many of the most powerful capabilities within the Interaction Hub.
    - If you have recently upgraded to PeopleSoft 9.1, and an immediate upgrade to 9.2 is simply not realistic, you can apply the same approaches described here to find untapped capabilities in your current products. Many of the features in 9.2 were delivered first in our 9.1 Feature Packs. To find the Release Value Proposition, Pre-Release Notes, and Release Notes for these releases, search on ‘PeopleSoft 9.1 Documentation Home Page’ on My Oracle Support, and select your desired product area.

    Read the article

  • High Resolution Timeouts

    - by user12607257
    The default resolution of application timers and timeouts is now 1 msec in Solaris 11.1, down from 10 msec in previous releases. This improves out-of-the-box performance of polling and event based applications, such as ticker applications, and even the Oracle rdbms log writer. More on that in a moment.
    As a simple example, the poll() system call takes a timeout argument in units of msec:
    System Calls                                          poll(2)
    NAME
        poll - input/output multiplexing
    SYNOPSIS
        int poll(struct pollfd fds[], nfds_t nfds, int timeout);
    In Solaris 11, a call to poll(NULL,0,1) returns in 10 msec, because even though a 1 msec interval is requested, the implementation rounds to the system clock resolution of 10 msec. In Solaris 11.1, this call returns in 1 msec.
    In specification lawyer terms, the resolution of CLOCK_REALTIME, introduced by POSIX.1b real time extensions, is now 1 msec. The function clock_getres(CLOCK_REALTIME,&res) returns 1 msec, and any library calls whose man page explicitly mentions CLOCK_REALTIME, such as nanosleep(), are subject to the new resolution. Additionally, many legacy functions that pre-date POSIX.1b and do not explicitly mention a clock domain, such as poll(), are subject to the new resolution. Here is a fairly comprehensive list:
    nanosleep pthread_mutex_timedlock pthread_mutex_reltimedlock_np pthread_rwlock_timedrdlock pthread_rwlock_reltimedrdlock_np pthread_rwlock_timedwrlock pthread_rwlock_reltimedwrlock_np mq_timedreceive mq_reltimedreceive_np mq_timedsend mq_reltimedsend_np sem_timedwait sem_reltimedwait_np poll select pselect _lwp_cond_timedwait _lwp_cond_reltimedwait semtimedop sigtimedwait aiowait aio_waitn aio_suspend port_get port_getn cond_timedwait cond_reltimedwait setitimer (ITIMER_REAL) misc rpc calls, misc ldap calls
    This change in resolution was made feasible because we made the implementation of timeouts more efficient a few years back when we re-architected the callout subsystem of Solaris. Previously, timeouts were tested and expired by the kernel's clock thread which ran 100 times per second, yielding a resolution of 10 msec. This did not scale, as timeouts could be posted by every CPU, but were expired by only a single thread. The resolution could be changed by setting hires_tick=1 in /etc/system, but this caused the clock thread to run at 1000 Hz, which made the potential scalability problem worse. Given enough CPUs posting enough timeouts, the clock thread could be a performance bottleneck. We fixed that by re-implementing the timeout as a per-CPU timer interrupt (using the cyclic subsystem, for those familiar with Solaris internals). This decoupled the clock thread frequency from timeout resolution, and allowed us to improve default timeout resolution without adding CPU overhead in the clock thread.
    Here are some exceptions for which the default resolution is still 10 msec. The thread scheduler's time quantum is 10 msec by default, because preemption is driven by the clock thread (plus helper threads for scalability). See for example dispadmin, priocntl, fx_dptbl, rt_dptbl, and ts_dptbl. This may be changed using hires_tick. The resolution of the clock_t data type, primarily used in DDI functions, is 10 msec. It may be changed using hires_tick. These functions are only used by developers writing kernel modules. A few functions that pre-date POSIX CLOCK_REALTIME mention _SC_CLK_TCK, CLK_TCK, "system clock", or no clock domain. These functions are still driven by the clock thread, and their resolution is 10 msec.
They include alarm, pcsample, times, clock, and setitimer for ITIMER_VIRTUAL and ITIMER_PROF. Their resolution may be changed using hires_tick. Now back to the database. How does this help the Oracle log writer? Foreground processes post a redo record to the log writer, which releases them after the redo has committed. When a large number of foregrounds are waiting, the release step can slow down the log writer, so under heavy load, the foregrounds switch to a mode where they poll for completion. This scales better because every foreground can poll independently, but at the cost of waiting the minimum polling interval. That was 10 msec, but is now 1 msec in Solaris 11.1, so the foregrounds process transactions faster under load. Pretty cool.

    Read the article

  • Rails3 server and bundler error: uninitialized constant Bundler (NameError)

    - by .yandex.rurap-kasta
    I just installed Rails 3 and all the gems it needs, but when I try to start the server, it complains about a problem in the boot script.
    [rap-kasta@acerAspire testR3]$ script/rails server
    /home/rap-kasta/tmp/testR3/config/boot.rb:7:in `rescue in <top (required)>': uninitialized constant Bundler (NameError)
        from /home/rap-kasta/tmp/testR3/config/boot.rb:2:in `<top (required)>'
        from script/rails:9:in `require'
        from script/rails:9:in `<main>'
    So, I tried to reinstall Bundler and install the "pre" version (but it really has a lower version number than the one I installed via gem install bundler). Now these gems are on the system:
    abstract (1.0.0) actionmailer (3.0.0.beta, 2.3.5, 2.3.4) actionpack (3.0.0.beta, 2.3.5, 2.3.4) activemodel (3.0.0.beta) activerecord (3.0.0.beta, 2.3.5, 2.3.4) activeresource (3.0.0.beta, 2.3.5, 2.3.4) activesupport (3.0.0.beta, 2.3.5, 2.3.4) arel (0.2.1, 0.2.pre) builder (2.1.2) bundler (0.9.5) erubis (2.6.5) fxri (0.3.7) fxruby (1.6.20) i18n (0.3.3) jemini (2010.1.24, 2010.1.5) mail (2.1.2) memcache-client (1.7.8) mime-types (1.16) mysql (2.8.1) nifty-generators (0.3.2, 0.3.0) rack (1.1.0, 1.0.1, 1.0.0) rack-mount (0.5.1, 0.4.0) rack-openid (0.2.3, 0.2.2) rack-test (0.5.3) rails (3.0.0.beta, 2.3.5, 2.3.4) railties (3.0.0.beta) rake (0.8.7) rawr (1.3.8) RedCloth (4.2.2) ruby-mysql (3.0.2) ruby-openid (2.1.7) rubygems-update (1.3.5) rubyzip (0.9.4, 0.9.1) rubyzip2 (2.0.1) sqlite3-ruby (1.2.5) text-format (1.0.0) text-hyphen (1.0.0) thor (0.13.2, 0.13.1) tzinfo (0.3.16)
    Also, there is the same error with rails console and a similar one with bundle check:
    [rap-kasta@acerAspire testR3]$ bundle check
    /usr/lib/ruby/gems/1.9.1/gems/bundler-0.9.5/bin/bundle:12:in `rescue in <top (required)>': uninitialized constant Bundler::BundlerError (NameError)
        from /usr/lib/ruby/gems/1.9.1/gems/bundler-0.9.5/bin/bundle:10:in `<top (required)>'
        from /usr/bin/bundle:19:in `load'
        from /usr/bin/bundle:19:in `<main>'

    Read the article

  • Instantiate an .aspx that is an embedded resource of an assembly

    - by asbjornu
    I have an ASP.NET (MVC) application in which I would like to load WebForms .aspx files that are embedded as resources in 3rd party assemblies. The reason I want to do this is to make a sort of "plug-in" system where a .dll file can be dropped in a folder and then picked up at runtime to provide additional functionality to the base application. I've gotten the plugin system to work (I'm using MEF) with plugins written in ASP.NET MVC (Views and Controllers), but for plain old ASP.NET (Pages), I've got myself into a bit of a problem. For the execution of the embedded .aspx file (which, in the usual WebForm way Inherits="My.BasePage") I've created a custom VirtualPathProvider, ResourceFile ControllerFactory and PageController. Within the PageController I've overridden the Execute(RequestContext) method and within it I'm trying to compile the .aspx with BuildManager.CreateInstanceFromVirtualPath(virtualPath, type). When doing this, I get the error message "Could not load type 'My.BasePage'", even though I'm giving the BuildManager the System.Type of My.BasePage in the call to CreateInstanceFromVirtualPath. I seem to be stuck at this point. I've tried to Server.Transfer() to the custom VirtualPathProvider handled URL to the same .aspx file, but that fails with the same error message. How can I help BuildManager find out where My.BasePage is defined and how come the Type requiredBaseType parameter of CreateInstanceFromVirtualPath seems to be ignored? I've tried to call BuildManager.AddReferencedAssembly(), but that only fails with "This method can only be called during the application's pre-start initialization stage". MSDN says: "The method must be called during the Application_PreStartInit stage of the application", but I have no such event in my HttpApplication object and find absolutely zero information about it on the internet. Either way, I don't want to be calling BuildManager.AddReferencedAssembly() in or before the Application_Start event, since that makes me have to recycle the whole application to be able to add new plugins to the system. Does anyone have any clues? Any other ideas on how I can "execute" an .aspx file that is embedded as a resource within an assembly through reflection? Can I for instance pre-compile the .aspx file within the same assembly as the base Page class it inherits?

    Read the article

  • How to convert datatable to json string using json.net?

    - by Pandiya Chendur
    How do I convert a DataTable to JSON using Json.NET? Any suggestions? I've downloaded the necessary binaries. Which class should I use to convert my DataTable to JSON? Thus far I've used this method to build the JSON string by hand from my DataTable:

        public string GetJSONString(DataTable table)
        {
            StringBuilder headStrBuilder = new StringBuilder(table.Columns.Count * 5); //pre-allocate some space, default is 16 bytes
            for (int i = 0; i < table.Columns.Count; i++)
            {
                headStrBuilder.AppendFormat("\"{0}\" : \"{0}{1}¾\",", table.Columns[i].Caption, i);
            }
            headStrBuilder.Remove(headStrBuilder.Length - 1, 1); // trim away last ,

            StringBuilder sb = new StringBuilder(table.Rows.Count * 5); //pre-allocate some space
            sb.Append("{\"");
            sb.Append(table.TableName);
            sb.Append("\" : [");
            for (int i = 0; i < table.Rows.Count; i++)
            {
                string tempStr = headStrBuilder.ToString();
                sb.Append("{");
                for (int j = 0; j < table.Columns.Count; j++)
                {
                    table.Rows[i][j] = table.Rows[i][j].ToString().Replace("'", "");
                    tempStr = tempStr.Replace(table.Columns[j] + j.ToString() + "¾", table.Rows[i][j].ToString());
                }
                sb.Append(tempStr + "},");
            }
            sb.Remove(sb.Length - 1, 1); // trim last ,
            sb.Append("]}");
            return sb.ToString();
        }

    Now I would like to use Json.NET instead, but I don't know where to get started.
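    For comparison, a minimal sketch of the Json.NET route being asked about (not from the original post; it assumes a Json.NET build whose built-in DataTable converter is available — older builds may need Newtonsoft.Json.Converters.DataTableConverter passed in explicitly):

        using System.Data;
        using Newtonsoft.Json;

        public static class DataTableJson
        {
            // Serializes the table to a JSON array of row objects, e.g.
            // [{"Id":1,"Name":"first"},{"Id":2,"Name":"second"}]
            public static string ToJson(DataTable table)
            {
                return JsonConvert.SerializeObject(table, Formatting.Indented);
            }
        }

    Note that the default output is a plain array of row objects rather than the {"TableName": [...]} wrapper produced by the hand-rolled method above.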

    Read the article

  • Selection Highlight in NSCollectionView

    - by Hooligancat
    On some occasions my head just hurts from banging it against the Cocoa wall. Today is one of those days. I have a working NSCollectionView with one minor, but critical, exception. Getting and highlighting the selected item within the collection. I've had all this working prior to Snow Leopard, but something appears to have changed and I can't quite place my finger on it, so I took my NSCollectionView right back to a basic test and followed Apple's documentation for creating an NSCollectionView here: http://developer.apple.com/mac/library/DOCUMENTATION/Cocoa/Conceptual/CollectionViews/Introduction/Introduction.html The collection view works fine following the quick start guide. However, this guide doesn't discuss selection other than "There are such features as incorporating image views, setting objects as selectable or not selectable and changing colors if they are selected". Using this as an example I went to the next step of binding the Array Controller to the NSCollectionView with the controller key selectionIndexes, thinking that this would bind any selection I make between the NSCollectionView and the array controller and thus firing off a KVO notification. I also set the NSCollectionView to be selectable in IB. There appears to be no selection delegate for NSCollectionView and unlike most Cocoa UI views, there appears to be no default selected highlight. So my problem really comes down to a related issue, but two distinct questions. How do I capture a selection of an item? How do I show a highlight of an item? NSCollectionView's programming guides seem to be few and far between and most searches via Google appear to pull up pre-Snow Leopard implementations, or use the view in a separate XIB file. For the latter (separate XIB file for the view), I don't see why this should be a pre-requisite otherwise I would have suspected that Apple would not have included the view in the same bundle as the collection view item. I know this is going to be a "can't see the wood for the trees" issue - so I'm prepared for the "doh!" moment. As usual, any and all help much appreciated.

    Read the article

  • CSS Caching and User-Customized CSS - Any Suggestions?

    - by Joe
    I'm just trying to figure out the best approach here. My consideration is that I'd like my users to be able to customize the colors of certain elements on the page. These are not pre-made CSS themes they can choose from, but rather styles that can be edited with JavaScript, so they cannot be "pre-made" into separate style sheets. I am also writing the CSS into a cache directory, since I generate it with a PHP script. I am thinking that perhaps I should: 1) if the cached CSS file doesn't exist, create it using the website's default style settings; 2) if the cached CSS file does exist, check whether the user has custom settings and, if so, apply them to the cached file before serving it. By the way, when I refer to the "cached file" I mean the PHP-generated CSS document. My goal is to avoid having PHP regenerate the CSS file on every request, while still letting logged-in users have their customized CSS settings applied. I will most likely store these settings in a database, so they are restored whenever the user logs in again.
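    As a rough illustration of that two-step flow (not from the original post — generate_default_css(), get_user_style_overrides() and apply_overrides() are hypothetical helpers standing in for the asker's own logic):

        <?php
        session_start();

        // Serve the cached site-wide CSS, layering per-user overrides on top.
        $cacheFile = dirname(__FILE__) . '/cache/site.css';

        if (!file_exists($cacheFile)) {
            // 1) No cached copy yet: build it from the default style settings.
            file_put_contents($cacheFile, generate_default_css());
        }

        $css = file_get_contents($cacheFile);

        if (!empty($_SESSION['user_id'])) {
            // 2) Logged-in user: merge their saved color settings (from the
            //    database) into the cached defaults before output.
            $overrides = get_user_style_overrides($_SESSION['user_id']);
            $css = apply_overrides($css, $overrides);
        }

        header('Content-Type: text/css');
        echo $css;

    The expensive default generation stays cached, while the per-user merge remains a cheap in-memory step.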

    Read the article

  • DB2 with mod_perl not working, works fine in CGI

    - by Matthew
    Hi, I'm having issues with getting DBI's IBM DB2 driver to work with mod_perl. My test script is: #!/usr/bin/perl use CGI; use Data::Dumper; use DBI; $q = CGI->new; print $q->header; print $q->start_html(); $dsn = "DBI:DB2:SAMPLE"; $username = "username"; $password = "password"; $dbc = DBI->connect($dsn, $username, $password); $sth = $dbc->prepare("SELECT * FROM SOME_TABLE WHERE FIELD='SOMETHING'"); $sth->execute(); $row = $sth->fetchrow_hashref(); print "<pre>".$q->escapeHTML(Dumper($row))."</pre>"; print $q->end_html; This script works as CGI but not under mod_perl. I get this error in apache's error log: DBD::DB2::dr connect warning: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /usr/lib/perl5/site_perl/5.8.8/Apache/DBI.pm line 190. DBI connect('SAMPLE','username',...) failed: [unixODBC][Driver Manager]Data source name not found, and no default driver specified at /data/www/perl/test.pl line 15 First of all, why is it using ODBC? The native DB2 driver is installed (hence it works as CGI). Running Apache 2.2.3, mod_perl 2.0.4 under RHEL5. This guy had the same problem as me: http://www.mail-archive.com/[email protected]/msg22909.html But I have no idea how he fixed it. What does mod_php4 have to do with mod_perl? Any help would be greatly appreciated, I'm having no luck with google.

    Read the article

  • Javascript error when integrating django-tinymce and django-filebrowser

    - by jwesonga
    I've set up django-filebrowser in my app without any bugs, I already had django-tinymce set up and it loads the editor in the admin forms. I now want to use django-filebrowser with django-tinymce, but I keep getting a weird javascript error when I click on "Image URL" in the Image popup: r is undefined the error is js/tiny_mce/tiny_mce.js My settings.py file has the following configuration: TINYMCE_JS_URL=MEDIA_URL + 'js/tiny_mce/tiny_mce.js' TINYMCE_DEFAULT_CONFIG = { 'mode': "textareas", 'theme': "advanced", 'language': "en", 'skin': "o2k7", 'dialog_type': "modal", 'object_resizing': True, 'cleanup_on_startup': True, 'forced_root_block': "p", 'remove_trailing_nbsp': True, 'theme_advanced_toolbar_location': "top", 'theme_advanced_toolbar_align': "left", 'theme_advanced_statusbar_location': "none", 'theme_advanced_buttons1': "formatselect,styleselect,bold,italic,underline,bullist,numlist,undo,redo,link,unlink,image,code,template,visualchars,fullscreen,pasteword,media,search,replace,charmap", 'theme_advanced_buttons2': "", 'theme_advanced_buttons3': "", 'theme_advanced_path': False, 'theme_advanced_blockformats': "p,h2,h3,h4,div,code,pre", 'width': '700', 'height': '300', 'plugins': "advimage,advlink,fullscreen,visualchars,paste,media,template,searchreplace", 'advimage_update_dimensions_onchange': True, 'file_browser_callback': "CustomFileBrowser", 'relative_urls': False, 'valid_elements' : "" + "-p," + "a[href|target=_blank|class]," + "-strong/-b," + "-em/-i," + "-u," + "-ol," + "-ul," + "-li," + "br," + "img[class|src|alt=|width|height]," + "-h2,-h3,-h4," + "-pre," + "-code," + "-div", 'extended_valid_elements': "" + "a[name|class|href|target|title|onclick]," + "img[class|src|border=0|alt|title|hspace|vspace|width|height|align|onmouseover|onmouseout|name]," + "br[clearfix]," + "-p[class<clearfix?summary?code]," + "h2[class<clearfix],h3[class<clearfix],h4[class<clearfix]," + "ul[class<clearfix],ol[class<clearfix]," + "div[class]," } TINYMCE_FILEBROWSER = False TINYMCE_COMPRESSOR = False i've tried switching back to older versions of tinyMCE Javascript but nothing seems to work. Would appreciate some help

    Read the article

  • PHP CURL Google Calendar using Private URL

    - by MooCow
    I'm trying to get an array of events from Google Calendar using the Private URL. I read the Google API document but I want to try doing this without using the ZEND library since I have no idea what the eventual server file structure is and avoid having other people edit the codes. I also did a search before posting and ran into the same condition where PHP CURL_EXEC returns false with the URL but I get a JSON file if the URL is open using a web browser. Since I'm using the Private URL, do I really need to authenticate against the Google server using ZEND? I'm trying to have PHP clean up the array before encoding it for Flash. $URL = <string of the private URL from Google Calendar> $ch = curl_init($URL); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); $data = curl_exec($ch); curl_close($ch); $result = json_decode($data); print '<pre>'.var_export($data,1).'</pre>'; Screen output >>> false
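    A small diagnostic sketch (not from the original post) that can show why curl_exec() returns false here — with https:// feeds, common causes include an SSL certificate verification failure or a redirect that is not being followed, and curl_errno()/curl_error() will report which:

        <?php
        // $URL is the private Google Calendar feed URL, as in the snippet above.
        $ch = curl_init($URL);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);   // follow any 30x redirect

        $data = curl_exec($ch);
        if ($data === false) {
            // Surface the underlying cURL failure instead of silently getting false.
            printf("cURL error %d: %s\n", curl_errno($ch), curl_error($ch));
        }
        curl_close($ch);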

    Read the article

  • Good practices: How to reuse .csproj and .sln files to create your MSBuild script for CI?

    - by Gishu
    What is the painless, maintainable way of using MSBuild as your build runner? (Forgive the length of this post.) I was just trying my hand at TeamCity (which I must say is awesome w.r.t. learning curve and out-of-the-box functionality). I got an SVN + MSBuild + NUnit + NCover combo working. I was curious as to how moderate to large projects are using MSBuild - I've just pointed MSBuild at my main .sln file. I spent some time with NAnt some years ago, and I find MSBuild a bit obtuse; the docs are too dense/detailed for a beginner. MSBuild seems to have some special magic to handle .sln files; I tried writing a custom build script by hand, linking/including the .csproj files in order (so that I could have custom pre/post build tasks), but it threw up (citing duplicate target imports). I'm assuming most devs wouldn't want to go messing around with MSBuild proj files - they'd be making changes to the .csproj and .sln files. Is there some tool / MSBuild task that reverse-engineers a new script from an existing .sln + its .csproj files that I'm unaware of? If I'm using MSBuild just to do the compile step, I might as well use NAnt with an exec task to MSBuild for compiling the solution? I have this nagging feeling that I'm missing something obvious. My end goal here is to have an MSBuild build script which:
    - builds the solution (acting as a build script instead of just a compile step);
    - allows custom pre/post tasks (e.g. calling NUnit to run an NUnit project, which seems not yet supported via the TeamCity web UI);
    - stays out of the way of the developers making changes to the solution;
    - has no redundancy, i.e. doesn't require devs to make the same change in 2 places.

    Read the article

  • How to delete a large cookie that causes Apache to 400

    - by jakemcgraw
    I've come across an issue where a web application has managed to create a cookie on the client which, when submitted by the client to Apache, causes Apache to return the following:

        HTTP/1.1 400 Bad Request
        Date: Mon, 08 Mar 2010 21:21:21 GMT
        Server: Apache/2.2.3 (Red Hat)
        Content-Length: 7274
        Connection: close
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Size of a request header field exceeds server limit.<br />
        <pre>
        Cookie: ::: A REALLY LONG COOKIE :::
        </pre>
        </p>
        <hr>
        <address>Apache/2.2.3 (Red Hat) Server at www.foobar.com Port 80</address>
        </body></html>

    After looking into the issue, it would appear that the web application has managed to create a really long cookie, over 7000 characters. Don't ask me how the web application was able to do this; I was under the impression browsers were supposed to prevent it from happening. I've managed to come up with a solution to prevent the cookies from growing out of control again. The issue I'm trying to tackle is: how do I reset the large cookie on the client if, every time the client submits a request, Apache returns a 400 client error? I've tried using the ErrorDocument directive, but it appears that Apache bails on the request before reaching any custom error handling.

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >