Search Results

Search found 1098 results on 44 pages for 'precise'.


  • In Outlook 2007 Rules and Alerts, EXACTLY what does "my name" mean?

    - by Cornan The Iowan
    I can't find any definition of "my name" in the Outlook 2007 Rules and Alerts or on the Internet. In this case our email system presents two email addresses for me to the outside world. I'd like BOTH of these addresses to be recognized as being "me". I thought that perhaps if I understood the definition of "my name" in the rules, I could set up my mailbox(es) appropriately. Of course, if "my name" actually means a single email address, then I won't be able to do so, but if it means "any email on my account" or "any account meeting [some criteria]", then I might be successful. I'd like to note a subtlety in the rules definitions. While there is a rule named "where my name is in the To or Cc box", the only rule for explicit addresses is "sent to people or distribution list" (I'm assuming that "sent to" means "in the To: list" rather than "in the To: or Cc: lists"). Summing up, my preferences are: 1) Understanding the precise definition of "my name" so that I can use "where my name is in the To or Cc box" to capture both email addresses from my account. 2) Learning whether "sent to people or distribution list" actually includes Cc: entries (I can test this myself, of course). 3) Any other solution that will let me define a rule where my secondary email address is detected in EITHER the To: or Cc: boxes.

    Read the article

  • How to export and import a user profile from one Quassel core to another?

    - by Zertrin
    I have been using Quassel as my IRC bouncer for quite a long time now. We (a group of administrators of a small network) have set up a shared Quassel core with many users on the same core. But now I would like to export everything related to my user account from the Quassel database on this core, in order to re-import it later into another Quassel core on my own server. Unfortunately, while a feature for adding users has been implemented in Quassel, nothing is so far provided for either exporting or deleting a user. (If a delete-a-user feature were available, I could have made a copy of the current database, deleted all the other users leaving only mine, and used the resulting database on my own server, while leaving the first one untouched on the shared server.) Despite extensive research on the Internet on this subject, I've found no solution so far. I should also point out that the backend database for the core has been migrated from the default SQLite backend to a PostgreSQL backend as the database grew considerably (over 1.5 GB by now). However, I'd be glad to hear of any working solution (SQLite or PostgreSQL backend) describing a way to export the data related to a specific user profile and then re-import it into a new Quassel core database.
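    In the absence of a built-in export, one possible angle is pulling the user's rows straight out of PostgreSQL with psql. A rough sketch follows; the table and column names (quasseluser, buffer, backlog, userid, bufferid) are assumptions based on Quassel's schema, so verify them with \dt first, and note that re-importing into a fresh core remains the harder half of the problem:

        # find the internal id of the user to export (quasseluser table assumed)
        psql quassel -c "SELECT userid, username FROM quasseluser;"

        # export that user's backlog rows (userid 42 assumed) to CSV
        psql quassel -c "\copy (SELECT b.* FROM backlog b JOIN buffer bu ON b.bufferid = bu.bufferid WHERE bu.userid = 42) TO 'backlog_user42.csv' CSV HEADER"

    The same per-table \copy would need repeating for every table that carries a userid or bufferid column.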

    Read the article

  • How to back up initial state of external backup drive?

    - by intuited
    I've picked up an HP Simplesave external drive. It comes with some fancy software that is of no use to me because I don't use Windows. Like many current consumer-targeted backup drives, the backup software is actually contained on the drive itself. I'd like to save the drive's initial state so that I can restore it if I decide to sell it. The backup box itself is somewhat customized: in addition to the hard drive device, it presents a CDROM-like device on /dev/sr0. I gather that the purpose of this cdrom device is to bootstrap, via Windows autoplay, the backup application which lives on the disk itself. I wouldn't assume any guarantees about how it does this, so it seems important to preserve the exact state of the disk. The drive is formatted with a single 500GB NTFS partition. My initial thought was to use dd to dump the disk (/dev/sdb) itself, but this proved impractical, as the resulting file was not sparse. This seemed to be because the NTFS empty space is not filled with zeroes, but with a repeating series of 16 bytes. I tried gzipping the output of dd. This reduced the file to a manageable size — the first 18GB was compressed to 81MB, versus 47MB to tarball the contents of the mounted filesystem — but it was very slow on my admittedly somewhat derelict Pentium M processor. The time to do that first 18GB was about 30 minutes. So I've resorted to dumping the disk state and partition data separately. I've dumped the partition state with sfdisk -d /dev/sdb > sfdisk.-d.out I've also created a compressed image of the NTFS partition (the only one on the disk) with ntfsclone --save-image --output - /dev/sdb1 | gzip -c > ntfsclone.img.gz Is there anything else I should do to ensure that I can restore the precise original state of the drive?
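    For completeness, the matching restore steps should look something like this (a sketch I haven't tested against this particular drive; double-check the device names before writing anything back):

        # restore the partition table from the sfdisk dump
        sfdisk /dev/sdb < sfdisk.-d.out

        # restore the compressed NTFS image onto the recreated partition
        gunzip -c ntfsclone.img.gz | ntfsclone --restore-image --overwrite /dev/sdb1 -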

    Read the article

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions: # Look for and purge old sessions every 30 minutes 09,39 * * * * root [ -x /usr/lib/php5/maxlifetime ] \ && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \ -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \ fuser -s {} 2> /dev/null \; -delete My problem is that this process is taking a very long time to run, with lots of disk IO. Here's my CPU usage graph: The cleanup run is represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks. At 15:00 I removed the 39 minute run from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). Here are the corresponding graphs for IO time: And disk operations: At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource-intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual "fuser" step (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
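    For the record, that experiment boils down to running the same find invocation with the fuser guard dropped (a sketch of a one-off manual test run as root, not a drop-in crontab replacement):

        find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f \
            -cmin +$(/usr/lib/php5/maxlifetime) -delete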

    Read the article

  • Installing the Updated XP Mode which Requires no Hardware Virtualization

    - by Mysticgeek
    Good news for those of you who have a computer without Hardware Virtualization: Microsoft has dropped the requirement, so you can now run XP Mode on your machine. Here we take a look at how to install it and get it working on your PC. Microsoft has dropped the requirement that your CPU support Hardware Virtualization for XP Mode in Windows 7. Before this requirement was dropped, we showed you how to use SecurAble to find out if your machine would run XP Mode. If it couldn't, you might have gotten lucky with turning Hardware Virtualization on in your BIOS, or getting an update that would enable it. If not, you were out of luck or would need a different machine. Note: Although you no longer need Hardware Virtualization, you still need the Professional, Enterprise, or Ultimate edition of Windows 7. Download Correct Version of XP Mode For this article we're installing it on a Dell machine that doesn't support Hardware Virtualization, running the Windows 7 Ultimate 64-bit version. The first thing you'll want to do is go to the XP Mode website and select your edition of Windows 7 and language. There are then three downloads you'll need to get from the page: Windows XP Mode, Windows Virtual PC, and the Windows XP Mode Update (all links below). Windows Genuine validation is required before you can download the XP Mode files. To make the validation process easier, you might want to use IE when downloading these files and validating your version of Windows. Installing XP Mode After validation is successful, the first thing to download and install is XP Mode; installation is easy, just follow the wizard and accept the defaults. The second step is to install KB958559, which is Windows Virtual PC. After it's installed, a reboot is required. After you've come back from the restart, you'll need to install KB977206, which is the Windows XP Mode Update. After that's installed, yet another restart of your system is required. After the update is configured and you return from the second reboot, you'll find XP Mode in the Start menu under the Windows Virtual PC folder. When it launches, accept the license agreement and click Next. Enter your login credentials… Choose whether you want Automatic Updates or not… Then you're given a message saying setup will share the hardware on your computer; click Start Setup. While setup completes, you're shown a display of what XP Mode does and how to use it. XP Mode launches and you can now begin using it to run older applications that are not compatible with Windows 7. Conclusion This is welcome news for many who wanted the ability to use XP Mode but didn't have the proper hardware for it. The bad news is that users of Home versions of Windows still don't get to enjoy the XP Mode feature officially. However, we have an article that shows a great workaround – Create an XP Mode for Windows 7 Home Versions & Vista.
Download XP Mode, Windows Virtual PC, and Windows XP Mode Update

    Read the article

  • SQL SERVER – Video – Beginning Performance Tuning with SQL Server Execution Plan

    - by pinaldave
    Traveling can be a most interesting or a most exhausting experience; however, it is always the most enlightening experience one can have. Before setting out on a long journey, one has to prepare a lot of things: pack necessary travel gear, clothes and medicines. Still, the most essential part of travel is the journey to the destination itself. There are many routes one may prefer, but the ultimate goal is to have a delightful experience along the way. Here is a video which explains how to begin with SQL Server execution plans. Performance Tuning is a Journey Performance tuning is just like a long journey. The goal of performance tuning is efficient query execution that consumes the fewest resources and returns accurate results. Just as maps are the most essential aspect of a journey, execution plans are essentially maps for SQL Server to reach the result set. The goal of the execution plan is to find the most efficient path, which translates to the least usage of resources (CPU, memory, IO, etc.). Execution Plans are like Maps When online maps were introduced (e.g. Bing, Google, MapQuest, etc.), initially it was not possible to customize them; they gave a single route to reach the destination. As time evolved, it became possible to give various hints to the maps, for example 'via public transport', 'walking', 'fastest route', 'shortest route', 'avoid highway'. There are places where we manually drag the route and adapt it to our needs. The same situation holds for SQL Server execution plans: if we want to tune queries, we need to understand the execution plans and their internals. We need to understand the smallest details of the execution plan when our destination is optimal queries. Understanding Execution Plans The biggest challenge with maps is figuring out the optimal path. In the same way, the most common challenge with execution plans is where to start and which precise route to take. Here is a quick list of frequently asked questions related to execution plans: Should I read execution plans bottom up or top down? Are execution plans read left to right or right to left? What is the relationship between the actual execution plan and the estimated execution plan? When I mouse over an operator I see CPU and IO but not memory; why? Sometimes I run the query multiple times and get different execution plans; why? How do I cache the query execution plan and data? I created an optimal index but the query is not using it; what should I change – the query, the index, or should I provide hints? What tools are available to help quickly debug performance problems? Etc… Honestly, the list is quite big, and it is humanly impossible to cover everything in words. SQL Server Performance: Introduction to Query Tuning My friend Vinod Kumar and I have created a video learning course on beginning performance tuning. We have covered a plethora of subjects in the course. Here is a quick list: Execution Plan Basics Essential Indexing Techniques Query Design for Performance Performance Tuning Tools Tips and Tricks Checklist: Performance Tuning We believe we have covered a lot in this four-hour course, and we encourage you to go over the video course if you are interested in beginning SQL Server performance tuning and query tuning.
Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • SQLAuthority News – The Best Quotes of “Who Wrote This?” Contest

    - by pinaldave
    I am a frequent reader of Brent Ozar PLF; it is one of my favorite blogs. A recent post announced a "Who Wrote This?" contest to see if readers could tell the three contributors apart based on some writing samples. Here are my favorite lines from the sample paragraphs, from each of the three "mystery authors." Topic 1: Working with Bad Managers Mystery Author A – "Working with bad managers means working against my own happiness, and I've come to learn that there's no changing bad managers." I love this line because, as anyone who has had a bad manager knows, often a lot of self-doubt rises up. We all have to remember that sometimes the problem is out of our control. Mystery Author B – "Mentor your manager just like you would mentor a junior DBA." Having a bad manager can be extremely depressing, and we often feel out of control. But we all need to remember that our work is a two-way street, and that sometimes we can subtly influence those above us. Mystery Author C – "The trick to working for all bad managers is to remember that they aren't your parent. Take charge of your career." We all also need to learn not to play the blame game. Would you rather stay in a place where you are unhappy, or would you rather take charge of your life? I hope most people would pick the latter. Topic 2: Working with Remote Teams Mystery Author A – "Like almost anything else the key is to make sure that everyone on the team has an understanding of how and when communication will occur." Communication is so important; I cannot overemphasize how much. And this one line captures how I feel and even communicates the idea clearly! Mystery Author B – "The key to remote team success is verifiable trust: feeling confident that invisible team members are doing the right amount of the right thing at the right time." I think this line not only captures the key aspects of remote work – verifiable work and trust – but there were so many lines that followed that I loved and could not fit here. The whole paragraph is a list for successful remote work. Everyone could benefit from reading it. Mystery Author C – "What seems clear, precise, and specific in one time zone comes across as vague, soupy, and just plain weird in another." You know what? I just love this description. The author is right – sometimes vague e-mails really do seem soupy and weird! Topic 3: Working with Your Nemesis Mystery Author A – "Every job is temporary, but your reputation stays with you." Everyone needs to remember this. The workplace is meant to be a professional arena, and many people have the opinion that work is temporary and disposable. No one wants to work with a co-worker like that. Mystery Author B – "Unhealthy conflict is going to lead to leaving three week old tuna fish sandwiches in someone's desk drawer." Sometimes humor really is the best policy! Mystery Author C – "Oh no, it's that guy." This might seem like a weird phrase to choose as my favorite from an entire paragraph. But the whole piece was written in the form of a story of co-workers getting drunk and plotting against a nemesis. It was too funny to overlook, but too long to post here. A must read! Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • How to reproject a shapefile from WGS 84 to the Spherical/Web Mercator projection

    - by samkea
    Definitions: You will need to know the meaning of the terms below. I have given a small description of each acronym, but you can google them and learn more. #1: WGS-84 - World Geodetic System (1984) - is a standard reference coordinate system used for cartography, geodesy and navigation. #2: EPSG - European Petroleum Survey Group - was a scientific organization with ties to the European petroleum industry, consisting of specialists working in applied geodesy, surveying, and cartography related to oil exploration. EPSG::4326 is a common coordinate reference system that refers to WGS84 as (latitude, longitude) pair coordinates in degrees, with Greenwich as the central meridian. Any degree representation (e.g., decimal or DMSH: degrees minutes seconds hemisphere) may be used. Which degree representation is used must be declared for the user by the supplier of the data. The Spherical/Web Mercator projection is referred to as EPSG::3785, which was renamed EPSG:900913 by Google for use in Google Maps. The associated CRS (Coordinate Reference System) for this is the "Popular Visualisation CRS / Mercator". This is the kind of projection that is used by Google Maps, Bing Maps, OSM, Virtual Earth, DeepEarth, et cetera... to show interactive maps over the web with their nearly precise coordinates. Reprojection: After reading a lot about reprojecting my coordinates from the DeepEarth project on CodePlex, I still could not do it. After some help from a colleague, I got my ball rolling. This is how I did it. #1 You need to download and open your shapefile using Q-GIS; it's the one with the biggest number of coordinate reference systems/projections. #2 Use the Plugins menu, and enable fTools and the WFS plugin. #3 Use the Vector menu --> Data Management Tools and choose "define current projection". Enable "use predefined reference system" and choose the WGS 84 coordinate system. I am personally in zone 36, so I chose WGS84 - UTM Zone 36N under (Projected Coordinate Systems --> Universal Transverse Mercator) and clicked OK. #4 Now use the Vector menu --> Data Management Tools and choose "export to new projection". The same dialog will pop up. Now choose WGS 84 EPSG::4326 under Geodetic Coordinate Systems. My input user-defined spatial reference system looks like this: +proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs Your output user-defined spatial reference system should look like this: +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs Browse to the place where the shapefile is going to be and give the shapefile a name (like original_reprojected). If it prompts you to add the projected layer to the TOC, accept. There, you have your re-projected map with latitude and longitude pairs of coordinates. #5 Now, this is not the actual Spherical/Web Mercator projection, but don't worry; this is where you have to stop. All the other custom web-mapping portals will pick up this projection and transform it into EPSG::3785 or EPSG:900913, but the coordinates will remain the LatLon pair of the projected shapefile. If you want to test a particular known point, Q-GIS has a lot of room for that. Go ahead and test it.
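    For anyone who prefers the command line to the Q-GIS menus, GDAL's ogr2ogr can do the same reprojection in one step. A sketch using the article's own input proj4 string as the source SRS (substitute your zone's parameters, or an EPSG code such as EPSG:32636 for a stock WGS84 / UTM Zone 36N source; the filenames are placeholders):

        ogr2ogr -t_srs EPSG:4326 \
            -s_srs "+proj=tmerc +lat_0=0 +lon_0=33 +k=0.9996 +x_0=500000 +y_0=200000 +ellps=WGS84 +datum=WGS84 +units=m +no_defs" \
            reprojected.shp original.shp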

    Read the article

  • Book Reviews: Art of Community and Eyetracking Web Usability

    - by ultan o'broin
    Holiday time offers a chance to catch up on some user experience and user assistance related material. So, two short book reviews (which I considered using my new Tumblr blog for; more about that another time) coming up. The Art of Community by Jono Bacon An excellent starting point for anyone wanting to get going in the community software (FLOSS, for example) space, or to understand how to set up, manage, and leverage the collective intelligence of communities for whatever ends. The book is a little too long in my opinion, and of course, what Jono recommends needs to be nuanced and adapted for the enterprise applications space (hardly surprising there is a lot about Ubuntu, Lug Radio, and so on, given Jono's interests). Shame there wasn't more information on international, non-English community considerations too. Still, some great ideas and insight into setting up and managing communities that I will leverage (watch out for the results on this blog later in 2011). One section, on collaborative writing, really jumped out. It reinforced the idea that successful community initiatives are based on instigators knowing what makes the community tick in the first place. How about this for insight into user profiles for people who write community user assistance (OK then, "doc") and what tools they might use (in this case, we're talking about Jokosher): "Most people who write documentation for open source software projects would fall into the category of power user. They are technology enthusiasts who are not interested in the super-technical avenues of programming, but want to help out. Many of these people have good writing skills and a good knowledge of using the software, so the documentation fit is natural. With Jokosher we wanted to acknowledge this profile of user. As such, instead of focussing on complex text processing tools, we encouraged our documentation contributors to use a wiki." The book is available for free here, as well as being available from the usual sources. Eyetracking Web Usability by Jakob Nielsen and Kara Pernice Another fine book by established experts. I have some field experience of eyetracking studies myself (in the user assistance for enterprise applications space), though Jakob and Kara concentrate on websites for their research here. I would caution against assuming that findings about websites transfer easily to the applications space, especially enterprise applications, as the book claims they do. However, Jakob and Kara do make the case very well that understanding design goals (for example, productivity improvement in the case of applications) and the context of the software's use is critical. Executing a study using eyetracking technology requires that you know what you want to test, can set up realistic tasks for testing by representative testers, and can then analyze the results. Be precise, as lots of data will be generated (I think the authors underplay the effort of analyzing the data too). What I found disappointing was the lack of emphasis on eyetracking as only part of the usability solution. It's really for fine-tuning designs, in my opinion, and should be used after other design reviews. I also wasn't that crazy about the level of disengagement between the qualitative and quantitative sides of this kind of testing that the book indicated. I think it is useful to have testers verbalize their thoughts and for test engineers to prompt, intervene, or guide as necessary.
More on cultural or international aspects to usability testing might have been included too (websites are available to everyone). To conclude, I enjoyed the book, took on board some key takeaways about methodologies and found the recommendations sensible and easy to follow (for example about Forms layouts). Applying enterprise applications requirements such as those relating to user profiles, design goals, and overall context of use in conjunction with what's in this book would be the way to go here. It also made me think of how interesting it would be to compare eyetracking findings between website and enterprise applications usage.

    Read the article

  • How do NTP Servers Manage to Stay so Accurate?

    - by Akemi Iwaya
    Many of us have had the occasional problem with our computers and other devices retaining accurate time settings, but a quick sync with an NTP server makes all well again. But if our own devices can lose accuracy, how do NTP servers manage to stay so accurate? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Photo courtesy of LEOL30 (Flickr). The Question SuperUser reader Frank Thornton wants to know how NTP servers are able to remain so accurate: I have noticed that on my servers and other machines, the clocks always drift, so that they have to sync up to remain accurate. How do the NTP server clocks keep from drifting and always remain so accurate? The Answer SuperUser contributor Michael Kjorling has the answer for us: NTP servers rely on highly accurate clocks for precision timekeeping. A common time source for central NTP servers is an atomic clock or a GPS receiver (remember that GPS satellites have atomic clocks onboard). These clocks are defined as accurate since they provide a highly exact time reference. There is nothing magical about GPS or atomic clocks that makes them tell you exactly what time it is. Because of how atomic clocks work, they are simply very good at keeping accurate time once they have been told what time it is (since the second is defined in terms of atomic effects). In fact, it is worth noting that GPS time is distinct from the UTC that we are more used to seeing. These atomic clocks are in turn synchronized against International Atomic Time, or TAI, in order to not only accurately tell the passage of time, but also the time itself. Once you have an exact time on one system connected to a network like the Internet, it is a matter of protocol engineering to enable the transfer of precise times between hosts over an unreliable network. In this regard a Stratum 2 (or farther from the actual time source) NTP server is no different from your desktop system syncing against a set of NTP servers. Once you have a few accurate times (as obtained from NTP servers or elsewhere) and know the rate of advancement of your local clock (which is easy to determine), you can calculate your local clock’s drift rate relative to the “believed accurate” passage of time. Once locked in, this value can then be used to continuously adjust the local clock to make it report values very close to the accurate passage of time, even if the local real-time clock itself is highly inaccurate. As long as your local clock is not highly erratic, this should allow keeping accurate time for some time even if your upstream time source becomes unavailable for any reason. Some NTP client implementations (probably most ntpd daemon or system service implementations) do this, and others (like ntpd’s companion ntpdate, which simply sets the clock once) do not. This is commonly referred to as a drift file because it persistently stores a measure of clock drift, but strictly speaking it does not have to be stored as a specific file on disk. In NTP, Stratum 0 is by definition an accurate time source. Stratum 1 is a system that uses a Stratum 0 time source as its time source (and is thus slightly less accurate than the Stratum 0 time source). Stratum 2 again is slightly less accurate than Stratum 1 because it syncs its time against the Stratum 1 source, and so on.
In practice, this loss of accuracy is so small that it is completely negligible in all but the most extreme of cases. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
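    As a quick hands-on check of the above, any machine running ntpd will show you the peers it syncs against, along with their stratum, offset, and jitter:

        ntpq -p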

    Read the article

  • Should I upgrade to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" and what care do I need to take if I upgrade?

    - by PHPLover
    I'm basically a Web Developer (PHP Developer) by profession. I mainly work on PHP, jQuery, AJAX, Smarty, HTML and CSS, and the Bootstrap front-end web development framework. I've also installed and am using IDEs/editors like Sublime Text and NetBeans. I'm also using a Git repository for my website development as a versioning tool. I've been using "Ubuntu 12.04 LTS" on my machine for almost two years now. My machine configuration is as follows: Memory : 3.7 GiB Processor : Intel® Core™ i3 CPU M 370 @ 2.40GHz × 4 Graphics : Unknown OS type : 64-bit Disk : 64-bit The important software present on my machine, which I'm using daily for my work, is as follows: PHP : PHP 5.3.10-1ubuntu3.13 with Suhosin-Patch (cli) (built: Jul 7 2014 18:54:55) Copyright (c) 1997-2012 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2012 Zend Technologies Apache web server : /usr/sbin/apachectl: 87: ulimit: error setting limit (Operation not permitted) Server version: Apache/2.2.22 (Ubuntu) Server built: Jul 22 2014 14:35:25 Server's Module Magic Number: 20051115:30 Server loaded: APR 1.4.6, APR-Util 1.3.12 Compiled using: APR 1.4.6, APR-Util 1.3.12 Architecture: 64-bit Server MPM: Prefork threaded: no forked: yes (variable process count) Server compiled with.... -D APACHE_MPM_DIR="server/mpm/prefork" -D APR_HAS_SENDFILE -D APR_HAS_MMAP -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled) -D APR_USE_SYSVSEM_SERIALIZE -D APR_USE_PTHREAD_SERIALIZE -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT -D APR_HAS_OTHER_CHILD -D AP_HAVE_RELIABLE_PIPED_LOGS -D DYNAMIC_MODULE_LIMIT=128 -D HTTPD_ROOT="/etc/apache2" -D SUEXEC_BIN="/usr/lib/apache2/suexec" -D DEFAULT_PIDLOG="/var/run/apache2.pid" -D DEFAULT_SCOREBOARD="logs/apache_runtime_status" -D DEFAULT_LOCKFILE="/var/run/apache2/accept.lock" -D DEFAULT_ERRORLOG="logs/error_log" -D AP_TYPES_CONFIG_FILE="mime.types" -D SERVER_CONFIG_FILE="apache2.conf" MySQL : 5.5.38-0ubuntu0.12.04.1 Smarty : 2.6.18 NetBeans : NetBeans IDE 8.0 (Build 201403101706) Sublime Text 2 : Version 2.0.2, Build 2221 Yesterday a pop-up message suddenly appeared on my screen asking me to upgrade to "Ubuntu 14.04 'Trusty Tahr'". I'd be very happy to upgrade my system to "Ubuntu 14.04 'Trusty Tahr'". Following are the issues I'm slightly worried about and on which I need you talented people's expert advice/help/suggestions: Will upgrading to "Ubuntu 14.04 'Trusty Tahr'" affect the software I mentioned above? I mean, will I need to re-install/uninstall and install this software too? Do I really need to upgrade, and is it really worth upgrading to "Ubuntu 14.04 'Trusty Tahr'" from "Ubuntu 12.04 LTS" now? If I upgrade to "Ubuntu 14.04 'Trusty Tahr'", what advantage will I get from a web developer's point of view? Will the upgrade be hassle-free, and will I be able to continue my ongoing work without any difficulties? Is "Ubuntu 14.04 'Trusty Tahr'" an LTS version, and if yes, till when is it going to be supported? These are the five crucial queries I have. If you want any further explanation from me, please feel free to ask. Thanks for spending some of your valuable time reading and understanding my issue. Any kind of help/comment/suggestion/answer would be highly appreciated. Though if someone gives a canonical, precise and up-to-the-mark answer, it will be of great help to me as well as to other web developers using Ubuntu around the world. Once again, thank you so much, you great people around the globe. Waiting for your precious replies.
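    For reference, the standard LTS-to-LTS upgrade path is a two-liner (a sketch: it assumes current backups and that /etc/update-manager/release-upgrades still has its 12.04 default of Prompt=lts):

        sudo apt-get update && sudo apt-get dist-upgrade
        sudo do-release-upgrade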

    Read the article

  • Read Mobi eBooks on Kindle for PC

    - by Matthew Guay
    Do you use your PC as an eBook reader? Kindle for PC makes it easy to read thousands of books from the Kindle Store on your computer. What you may not know is that it also works with the .mobi format, so you can increase the number of books you can read. Amazon has jumpstarted the eBook market with their popular Kindle device. Last fall Amazon unveiled Kindle for PC, and we reviewed how you can Read Kindle Books On Your Computer with Kindle for PC. Whether or not you own a Kindle or other eBook reader, this is a great way to take advantage of the thousands of eBooks available from the Kindle Store today. It supports the azw, prc, and tpz formats, which are sold from the Kindle Store, but it also supports Mobipocket (.mobi) eBooks that are not DRM protected. Here's how you can add them to Kindle for PC so you can easily read them on your PC. Getting Started: First, make sure you have Kindle for PC (link below) installed on your computer. Sign in with your Amazon account when you first run it. Kindle for PC lets you easily read eBooks downloaded from the Kindle Store, but it doesn't have any way to add other eBooks directly from the program. To add eBooks, you can sometimes download and double-click on the books, and they will open in Kindle for PC and be automatically added to the library. However, this does not always seem to work. So instead, browse to your Documents folder (simply click on the Documents link on your Start menu), and double-click on the My Kindle Content folder. This folder contains all the Kindle books you have downloaded. If you have other eBooks you would like to add to Kindle for PC, simply drag-and-drop or copy and paste them into this folder. Here we have a .mobi formatted book downloaded from Project Gutenberg that we're dragging into the folder. Now, close and reopen Kindle for PC. It should now show your new eBook right beside the eBooks you have downloaded from the Kindle Store. These eBooks work just the same as the ones downloaded from the Kindle Store, and you can change the font size and add bookmarks just as with other eBooks. The eBooks downloaded this way may show up with either an Amazon logo or a mobile device icon. You should only see the mobile device icon on .mobi files formatted for mobile devices; other ones should show up with the Amazon logo. In this screen, Pilgrim's Progress is a standard .mobi book, The Adventures of Sherlock Holmes is a Mobipocket book, and the others were downloaded from the Kindle Store. Conclusion This is a great way to read eBooks from across the internet on Kindle for PC. Wikipedia's Kindle page has a list of websites that offer eBooks formatted for the Kindle, so be sure to check it out for more books.
Links: Download Kindle for PC | List of websites that offer eBooks that will work on Kindle – via Wikipedia

    Read the article

  • MvcExtensions - PerRequestTask

    - by kazimanzurrashid
    In the previous post, we saw the BootstrapperTask, which executes when the application starts and ends. Similarly, there are times when we need to execute some custom logic when a request starts and ends. Usually, for this kind of scenario we create an HttpModule and hook the begin and end request events. There is nothing wrong with this approach, except that HttpModules are not at all IoC-container friendly; also, defining the HttpModule execution order is a bit cumbersome: you either have to modify the machine.config or clear the HttpModules and add them again in web.config. Instead, you can use the PerRequestTask, which is very container friendly and also supports execution order. Let's see a few examples of where it can be used. Remove www Subdomain Let's say we want to remove the www subdomain, so that if anybody types http://www.mydomain.com it automatically redirects to http://mydomain.com.

    public class RemoveWwwSubdomain : PerRequestTask
    {
        public RemoveWwwSubdomain()
        {
            Order = DefaultOrder - 1;
        }

        protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
        {
            const string Prefix = "http://www.";

            Check.Argument.IsNotNull(executionContext, "executionContext");

            HttpContextBase httpContext = executionContext.HttpContext;
            string url = httpContext.Request.Url.ToString();
            bool startsWith3W = url.StartsWith(Prefix, StringComparison.OrdinalIgnoreCase);
            bool shouldContinue = true;

            if (startsWith3W)
            {
                string newUrl = "http://" + url.Substring(Prefix.Length);

                HttpResponseBase response = httpContext.Response;

                response.StatusCode = (int)HttpStatusCode.MovedPermanently;
                response.Status = "301 Moved Permanently";
                response.RedirectLocation = newUrl;
                response.SuppressContent = true;

                shouldContinue = false;
            }

            return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
        }
    }

    As you can see, first we set the order so that we do not have to execute the remaining tasks of the chain when we are redirecting; next, in ExecuteCore, we check whether www is present. If it is, we send a permanently-moved HTTP status code and break the task execution chain; otherwise we continue with the chain. Blocking IP Addresses Let's take another scenario: your application is hosted in a shared hosting environment where you do not have permission to change the IIS settings, and you want to block certain IP addresses from visiting your application. Let's say you maintain a list of IP addresses in database/XML files, and you have an IBannedIPAddressRepository service which is used to match banned IP addresses.

    public class BlockRestrictedIPAddress : PerRequestTask
    {
        protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
        {
            bool shouldContinue = true;
            HttpContextBase httpContext = executionContext.HttpContext;

            if (!httpContext.Request.IsLocal)
            {
                string ipAddress = httpContext.Request.UserHostAddress;
                HttpResponseBase httpResponse = httpContext.Response;

                if (executionContext.ServiceLocator.GetInstance<IBannedIPAddressRepository>().IsMatching(ipAddress))
                {
                    httpResponse.StatusCode = (int)HttpStatusCode.Forbidden;
                    httpResponse.StatusDescription = "IPAddress blocked.";

                    shouldContinue = false;
                }
            }

            return shouldContinue ? TaskContinuation.Continue : TaskContinuation.Break;
        }
    }

    Managing Database Session Now, let's see how it can be used to manage an NHibernate session, assuming that NHibernate's ISessionFactory is already registered in our container.
    public class ManageNHibernateSession : PerRequestTask
    {
        private ISession session;

        protected override TaskContinuation ExecuteCore(PerRequestExecutionContext executionContext)
        {
            ISessionFactory factory = executionContext.ServiceLocator.GetInstance<ISessionFactory>();
            session = factory.OpenSession();

            return TaskContinuation.Continue;
        }

        protected override void DisposeCore()
        {
            session.Close();
            session.Dispose();
        }
    }

    As you can see, PerRequestTask can be used to execute small and precise tasks at the beginning and end of a request; certainly, if you want to execute anything other than begin/end request logic, there is no alternative to an HttpModule. That's it for today; in the next post we will discuss the Action Filters, so stay tuned.

    Read the article

  • Removing old kernel entries in Grub

    - by To Do
    I regularly delete old kernels leaving only the latest two entries using Synaptic. I'm using Precise. However, in my GRUB "previous Linux versions" menu there are quite a few entries labelled 2.6.8. I cannot find these linux-images in Synaptic. dpkg -l | grep linux-image gives: rc linux-image-3.0.0-17-generic 3.0.0-17.30 Linux kernel image for version 3.0.0 on x86/x86_64 ii linux-image-3.2.0-27-generic 3.2.0-27.43 Linux kernel image for version 3.2.0 on 32 bit x86 SMP ii linux-image-3.2.0-29-generic 3.2.0-29.46 Linux kernel image for version 3.2.0 on 32 bit x86 SMP ii linux-image-3.4.0-030400-generic 3.4.0-030400.201205210521 Linux kernel image for version 3.4.0 on 32 bit x86 SMP ii linux-image-generic 3.2.0.29.31 Generic Linux kernel image sudo update-grub gives: Generating grub.cfg ... Found linux image: /boot/vmlinuz-3.4.0-030400-generic Found initrd image: /boot/initrd.img-3.4.0-030400-generic Found linux image: /boot/vmlinuz-3.2.0-29-generic Found initrd image: /boot/initrd.img-3.2.0-29-generic Found linux image: /boot/vmlinuz-3.2.0-27-generic Found initrd image: /boot/initrd.img-3.2.0-27-generic Found linux image: /boot/vmlinuz-2.6.38-11-generic Found initrd image: /boot/initrd.img-2.6.38-11-generic Found linux image: /boot/vmlinuz-2.6.38-10-generic Found initrd image: /boot/initrd.img-2.6.38-10-generic Found linux image: /boot/vmlinuz-2.6.38-8-generic Found initrd image: /boot/initrd.img-2.6.38-8-generic Found memtest86+ image: /boot/memtest86+.bin Found Windows Vista (loader) on /dev/sda1 sudo apt-get remove linux-image-2.6.8-8-generic gives: E: Unable to locate package linux-image-2.6.8-8-generic E: Couldn't find any package by regex 'linux-image-2.6.8-8-generic' My boot folder contains the following: abi-2.6.38-10-generic initrd.img-3.4.0-030400-generic abi-2.6.38-11-generic memtest86+.bin abi-2.6.38-8-generic memtest86+_multiboot.bin abi-3.2.0-27-generic System.map-2.6.38-10-generic abi-3.2.0-29-generic System.map-2.6.38-11-generic abi-3.4.0-030400-generic System.map-2.6.38-8-generic config-2.6.38-10-generic System.map-3.2.0-27-generic config-2.6.38-11-generic System.map-3.2.0-29-generic config-2.6.38-8-generic System.map-3.4.0-030400-generic config-3.2.0-27-generic vmcoreinfo-2.6.38-10-generic config-3.2.0-29-generic vmcoreinfo-2.6.38-11-generic config-3.4.0-030400-generic vmcoreinfo-2.6.38-8-generic extlinux vmlinuz-2.6.38-10-generic grub vmlinuz-2.6.38-11-generic initrd.img-2.6.38-10-generic vmlinuz-2.6.38-8-generic initrd.img-2.6.38-11-generic vmlinuz-3.2.0-27-generic initrd.img-2.6.38-8-generic vmlinuz-3.2.0-29-generic initrd.img-3.2.0-27-generic vmlinuz-3.4.0-030400-generic initrd.img-3.2.0-29-generic and ls -l /etc/grub.d yields: total 56 -rwxr-xr-x 1 root root 6715 Apr 17 20:16 00_header -rwxr-xr-x 1 root root 5522 Oct 1 2011 05_debian_theme -rwxr-xr-x 1 root root 7407 May 17 09:22 10_linux -rwxr-xr-x 1 root root 6335 Apr 17 20:16 20_linux_xen -rwxr-xr-x 1 root root 1588 May 3 2011 20_memtest86+ -rwxr-xr-x 1 root root 7603 Apr 17 20:16 30_os-prober -rwxr-xr-x 1 root root 214 Oct 1 2011 40_custom -rwxr-xr-x 1 root root 95 Oct 1 2011 41_custom -rw-r--r-- 1 root root 483 Oct 1 2011 README gdisk -l /dev/sda yields: Partition table scan: MBR: MBR only BSD: not present APM: not present GPT: not present *************************************************************** Found invalid GPT and valid MBR; converting MBR to GPT format.
    *************************************************************** Disk /dev/sda: 312581808 sectors, 149.1 GiB Logical sector size: 512 bytes Disk identifier (GUID): F832A498-05E1-4615-B5B1-757ACB4A757A Partition table holds up to 128 entries First usable sector is 34, last usable sector is 312581774 Partitions will be aligned on 2048-sector boundaries Total free space is 4183661 sectors (2.0 GiB) Number Start (sector) End (sector) Size Code Name 1 2048 61442047 29.3 GiB 0700 Microsoft basic data 3 163842048 169986047 2.9 GiB 8200 Linux swap 4 169986048 312578047 68.0 GiB 0700 Microsoft basic data 5 61444096 159666175 46.8 GiB 8300 Linux filesystem Please help with removing the old and nonexistent kernels from GRUB.
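    Since dpkg -l lists no installed 2.6.38 packages, those GRUB entries almost certainly come from orphaned files in /boot. One possible cleanup (a sketch; confirm first that no package owns the files before deleting anything):

        dpkg -S /boot/vmlinuz-2.6.38-8-generic   # expect "no path found" if the file is orphaned
        sudo rm /boot/*-2.6.38-{8,10,11}-generic
        sudo update-grub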

    Read the article

  • Developing an Implementation Plan with Iterations by Russ Pitts

    - by user535886
    Ok, so you have come to grips with understanding that applying the iterative concept, as defined by OUM, is simply breaking up the project effort you have estimated for each phase into one or more six-week calendar-duration blocks of work. The idea being that the business user(s) or key recipient(s) of the work product(s) being developed never go longer than six weeks without some sort of review or prototyping of the work results for an iteration… "think-a-little", "do-a-little", and "show-a-little" in a six-week-or-less timeframe… ideally the business user(s) or key recipient(s) are involved throughout. You also understand the OUM concept that you only plan for that which you have knowledge of. The concept further defined: a project plan initially is developed at a high level, and becomes more detailed as project knowledge grows. Agreeing to this concept means you also have to admit the fallacy that one can plan with precision beyond six weeks into a project… anything beyond six weeks is a best guess in most cases when dealing with software implementation projects. Project planning, as defined by OUM, begins with the Implementation Plan view, which is a very high-level perspective of the effort estimated for each of the five OUM phases, as well as the number of iterations within each phase. You might wonder how you can predict the number of iterations for each phase at this early point in the project. Remember, project planning is not an exact science; initially it is high-level and abstract in nature, and then becomes more detailed and precise as the project proceeds. So where do you start in defining iterations for each phase of a project? The following are three easy steps to initially define the number of iterations for each phase: Step 1 => Start with identifying the known factors… …Prior to starting a project you should know: · The agreed-upon time period for an iteration (e.g. 6 weeks, or 4 weeks, or…) within a phase (it is recommended to keep the iteration time period consistent within a phase, if not for the entire project) · The number of resources available for the project · The total number of man-days (effort) you have estimated for each of the five OUM phases of the project · The number of work days in a week Step 2 => Calculate the man-days of effort required for an iteration within a phase… Let's assume for the sake of this example there are 10 project resources, and you have estimated 2,536 man-days of work effort for the elaboration phase of the project. Let's also assume a week for this project is defined as 5 business days, and that each iteration in the elaboration phase will last a calendar duration of 6 weeks.
    A simple calculation is performed to find the effort a single iteration consumes, which produces a result of… ((Number of resources * days per week) * duration of iteration) = number of man-days required per iteration ((10 resources * 5 days/week) * 6 weeks) = 300 man-days of effort required per iteration Step 3 => Calculate the number of iterations that can occur within a phase Next, calculate the number of iterations that can occur for the amount of man-days of effort estimated for the phase being considered… (number of man-days of effort estimated / number of man-days required per iteration) = number of iterations for the phase (2,536 man-days of estimated effort for the phase / 300 man-days of effort required per iteration) = 8.45 iterations, which should be rounded to a whole number such as 9 iterations* *Note - It is important to note this is an approximate calculation, not an exact science. This particular example is a simple one, which assumes all resources are utilized throughout the phase, including tech resources, etc. (round down or up to a whole number based on project-factor considerations). It is also often best to round up to the higher number, as this provides some calendar-scheduling contingency.
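    Restated as a quick command-line check, using the article's own numbers:

        echo $(( (10 * 5) * 6 ))           # resources x days/week x weeks = 300 man-days per iteration
        echo "scale=2; 2536 / 300" | bc    # phase effort / iteration effort = 8.45, round up to 9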

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes, sometimes showing a black screen (not very common anymore in my latest installs), sometimes the mouse and keyboard just fail to move and respond (not even Ctrl + Alt + F1 works), and sometimes I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphics card drivers because: after a hard reset, I usually get error reports about X.org / jockey; it's common for artifacts to appear during loading / shutdown / whenever, for instance: a pattern filled with £ during log off; an ugly-colored squared pattern during boot; windows that are only partially moved (i.e.: only the top half); Firefox renderings that leave the bottom ~30% of the page black. These artifacts appear right before the system freezes. I installed Ubuntu 12.04 LTS and, after several failed attempts to get my dual-monitor setup to work properly, I tried installing the new 12.10 version, hoping that this new version would have the problem solved... Unfortunately, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones); I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates, as well as the nouveau OSS driver. Nothing (except no driver at all, with a 640×480 resolution) seems stable. Here is the info on my graphics card: alix@alix-E500:~$ lspci | grep VGA 01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1) alix@alix-E500:~$ sudo lshw -C video [sudo] password for alix: *-display description: VGA compatible controller product: G86 [GeForce 8400M G] vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nouveau latency=0 resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff Right now, I don't even have my 22" monitor connected, as I can't even get my laptop display to work properly and without freezes. I've searched, read and tried all that I could (over several fresh reinstalls) to fix the problem, but so far no solution has proven definitive. I'm sorry I can't say precisely which symptom maps to each driver; I've been trying to solve this one on my own without logging what I'm doing. Perhaps someone here will be able to point me to a certain-fix solution; if not, I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem. Trying out NVIDIA accelerated graphics driver (version 173): scrolling and minimizing / maximizing windows take between 2 and 5 seconds to finish. Context menus also pop up very slowly, and typing seems delayed by ~1 second. No critical issues so far. Firefox's rendering of the Save Edits button is consistently messed up (random black lines at the top). Trying out NVIDIA accelerated graphics driver (version current) [Recommended]: all the delays mentioned above and the buggy rendering of the Save Edits button are gone, but I'm noticing that the whole screen flashes black for a couple of microseconds, and while I was writing this test for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). Had to force a hard reset.
    Also, the system hung for a couple of seconds during the fade-out of the "Restart" menu. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304): same symptoms as before; it crashed once while I was trying to install Chromium, and again, after a hard reset, when I was trying to remove the driver. The bottom of the screen did not go black, and I could move my mouse both times. Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot. Trying out NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307): the system crashed as soon as I clicked something. Had to do a fresh re-install. Trying out Nouveau: Accelerated Open Source driver for nVidia cards: artifacts still show up during boot, but other than that this one seems stable. As soon as I connected my second monitor, the responsiveness dropped a lot; animations and video are somewhat slow. I'm going to try this solution http://askubuntu.com/a/98871/9018 later on.
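    For reference, the ubuntu-x-swat route mentioned above normally amounts to the following (a sketch; on a system this unstable, be prepared to purge the package again from a TTY):

        sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
        sudo apt-get update
        sudo apt-get install nvidia-current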

    Read the article

  • GRUB 2 problem after Mac OS X update

    - by vallllll
    I have a MacBook Pro dual-booting Mac OS X / Ubuntu 12.04 (Precise Pangolin). When I boot it I get a rEFIt menu, and I can choose between Mac OS X and Linux. A few days ago I updated Mac OS X from 10.7 (Lion) to 10.8 (Mountain Lion) using a .dmg image provided by my company. Since then, when I select Linux in rEFIt, it says: No bootable device --insert boot disk and press any key I tried going to the rEFIt partitioning tool. This is what I got: As suggested in Mac OSX Mavericks update rEFIT broken, I wanted to fix the issue the same way as AndrewM, but I don't have the option "MBR table must be updated". I then booted from the Ubuntu 12.04 CD, chose "repair broken system", and chose root partition /dev/sda6, as this is where my Ubuntu file system is. I got a shell, but I don't really know how to repair the problem; if it were just a Windows dual boot, a GRUB update would solve the issue, but here I don't know where GRUB 2 is installed. Here are the results from parted, and it is a bit confusing to me, as the Mac partition is the one flagged boot: As you can see, entry 1 is an EFI system partition and is the boot partition, so I wonder whether I should install GRUB there or in sda6, which is the Ubuntu filesystem. I am not sure whether I should work from the rEFIt shell or from Ubuntu. Unfortunately, I don't remember where GRUB was before the update. UPDATE: using the same link above, I tried RoundSparrow hilltx's answer and installed rEFInd, but the result is the same... still no bootable device when I select Linux. UPDATE 2: I used the alternate CD again, mounted /dev/sda6 and then ran update-grub. It seemed to work and started listing all my kernels. But after rebooting several times, still no bootable device when I select Linux in rEFInd. UPDATE 3: I tried booting from the Ubuntu CD and selecting "boot from first available filesystem". I got an error and was dropped to a grub rescue shell.
    I even followed the indications in this link, but was unable to boot; I tried to use sdb6, but no luck. UPDATE 4: as per Rob Smith's request, here is the output from ls -l $(find /EFI -iname "*.efi") *MACOSX -rw-r--r--@ 1 root admin 55048 29 oct 17:44 /EFI/refind/drivers_x64/btrfs_x64.efi -rw-r--r--@ 1 root admin 38888 29 oct 17:44 /EFI/refind/drivers_x64/ext2_x64.efi -rw-r--r--@ 1 root admin 39304 29 oct 17:44 /EFI/refind/drivers_x64/ext4_x64.efi -rw-r--r--@ 1 root admin 43432 29 oct 17:44 /EFI/refind/drivers_x64/hfs_x64.efi -rw-r--r--@ 1 root admin 38984 29 oct 17:44 /EFI/refind/drivers_x64/iso9660_x64.efi -rw-r--r--@ 1 root admin 43656 29 oct 17:44 /EFI/refind/drivers_x64/reiserfs_x64.efi -rw-r--r--@ 1 root admin 175016 29 oct 17:44 /EFI/refind/refind_x64.efi -rw-rw-r-- 1 root admin 73232 7 mar 2010 /EFI/tools/dbounce.efi -rw-rw-r-- 1 root admin 763248 7 mar 2010 /EFI/tools/dhclient.efi -rw-rw-r-- 1 root admin 67024 7 mar 2010 /EFI/tools/drawbox.efi -rw-rw-r-- 1 root admin 71312 7 mar 2010 /EFI/tools/dumpfv.efi -rw-rw-r-- 1 root admin 84848 7 mar 2010 /EFI/tools/dumpprot.efi -rw-rw-r-- 1 root admin 472912 7 mar 2010 /EFI/tools/ed.efi -rw-rw-r-- 1 root admin 143856 7 mar 2010 /EFI/tools/edit.efi -rw-rw-r-- 1 root admin 1801008 7 mar 2010 /EFI/tools/ftp.efi -rw-r--r--@ 1 root admin 47848 29 oct 17:44 /EFI/tools/gptsync_x64.efi -rw-rw-r-- 1 root admin 320560 7 mar 2010 /EFI/tools/hexdump.efi -rw-rw-r-- 1 root admin 286384 7 mar 2010 /EFI/tools/hostname.efi -rw-rw-r-- 1 root admin 534416 7 mar 2010 /EFI/tools/ifconfig.efi -rw-rw-r-- 1 root admin 395344 7 mar 2010 /EFI/tools/loadarg.efi -rw-rw-r-- 1 root admin 587408 7 mar 2010 /EFI/tools/ping.efi -rw-rw-r-- 1 root admin 730416 7 mar 2010 /EFI/tools/pppd.efi -rw-rw-r-- 1 root admin 561360 7 mar 2010 /EFI/tools/route.efi -rw-rw-r-- 1 root admin 1961712 7 mar 2010 /EFI/tools/shell.efi -rw-rw-r-- 1 root admin 750224 7 mar 2010 /EFI/tools/tcpipv4.efi -rw-rw-r-- 1 root admin 4048 7 mar 2010 /EFI/tools/textmode.efi -rw-rw-r-- 1 root admin 320656 7 mar 2010 /EFI/tools/which.efi *LINUX
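    One avenue worth noting here: reinstalling GRUB from a chroot, which many EFI-boot repair guides suggest. A heavily hedged sketch only (grub-install flag behavior varies between GRUB versions; on this machine /dev/sda6 is the Ubuntu root and /dev/sda1 the EFI system partition):

        sudo mount /dev/sda6 /mnt
        sudo mount /dev/sda1 /mnt/boot/efi
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/sda   # with grub-efi the device argument is largely ignored
        sudo chroot /mnt update-grub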

    Read the article

  • SQL SERVER – Introduction to Big Data – Guest Post

    - by pinaldave
    BIG Data – such a big word – everybody talks about it nowadays. It is the buzzword in the database world. In one conversation I asked my friend Jasjeet Singh the question – what is Big Data? He instantly came up with a very effective write-up. Jasjeet is working as a Technical Manager with Koenig Solutions. He leads the SQL domain and holds rich IT industry experience. Talking about Koenig, it is a 19-year-old IT training company that offers several certification choices. Some of its courses include SharePoint Training, Project Management certifications, Microsoft Trainings, Business Intelligence programs, Web Design and Development courses etc. Big Data, as the name suggests, is about data that is BIG in nature. The data is BIG in terms of size, and it is difficult to manage such enormous data with the relational database management systems that are quite popular these days. Big Data is not just about being large in size; it is also about the variety of the data, which differs in form or type. Some examples of Big Data are given below: Scientific data related to weather and atmosphere, Genetics etc Data collected by various medical procedures, such as Radiology, CT scan, MRI etc Data related to the Global Positioning System Pictures and Videos Radio Frequency Data Data that may vary very rapidly, like stock exchange information Apart from the difficulties in managing and storing such data, it is difficult to query, analyze and visualize it. The characteristics of Big Data can be defined by four Vs: Volume: It simply means a large volume of data that may span Petabytes, Exabytes and so on. However, it also varies from organization to organization what volume of data they consider to be Big Data. Variety: As discussed above, Big Data is not limited to relational information or structured data. It can also include unstructured data like pictures, videos, text, audio etc. Velocity: Velocity means the speed at which data changes. The higher the velocity, the more efficient the system must be to capture and analyze the data. Missing any important point may lead to wrong analysis or may even result in loss. Veracity: It has been recently added as the fourth V, and generally means truthfulness or adherence to the truth. In terms of Big Data, it is more of a challenge than a characteristic. It is difficult to ascertain the truth out of an enormous amount of data, especially data with high velocity. There are always chances of having imprecise and uncertain data. It is a challenging task to clean such data before it is analyzed. Big Data can be considered the next big thing in the IT sector in terms of innovation and development. If appropriate technologies are developed to analyze and use the information, it can be the driving force for almost all industrial segments. These include Retail, Manufacturing, Service, Finance, Healthcare etc. This will help them to automate business decisions, increase productivity, and innovate and develop new products. Thanks Jasjeet Singh for an excellent write-up. Jasjeet Singh is working as a Technical Manager with Koenig Solutions. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Big Data

    Read the article

  • no launcher, no dash, no unity, how to get back to my desktop?

    - by Numan Syed
    FYI, I have tried these, but none has worked so far! Please help! I do not want to reinstall Ubuntu Precise: AskUbuntu:Unity Launcher missing AskUbuntu: Unity doesn't load Youtube:Restore missing launcher AskUbuntu:Unity 3D no longer works! Is there any other way to find a solution? Any help is highly appreciated! Please do ask for any further info you may need to point me in a better direction. Edit: I can still open a terminal with Ctrl+Alt+T. From there I launched firefox & to get the browser running. Edit 2: Tried to find more; found a better-explained description of the situation: http://askubuntu.com/q/260578/176470. Edit 3: @Adithya: tried that, no luck! Here is what my terminal gave me so far... [1447:22] (~) bash $ unity --reset WARNING: Unity currently default profile, so switching to metacity while resetting the values unity-panel-service: no process found Checking if settings need to be migrated ...no Checking if internal files need to be migrated ...no Backend : gconf Integration : true Profile : unity Adding plugins Initializing core options...done compiz (core) - Warn: failed to receive ConfigureNotify event on 0x1600004 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x30000b8 compiz (core) - Warn: failed to receive ConfigureNotify event on 0x2c00fc1 Initializing composite options...done Initializing opengl options...done Initializing decor options...done Initializing vpswitch options...done Initializing snap options...done Initializing mousepoll options...done Initializing resize options...done Initializing place options...done Initializing move options...done Initializing wall options...done Initializing grid options...done Initializing session options...done Initializing gnomecompat options...done Initializing animation options...done Initializing fade options...done compiz (core) - Error: Couldn't load plugin '/usr/lib/compiz/libunitymtgrabhandles.so' : /usr/lib/compiz/libunitymtgrabhandles.so: undefined symbol: _ZN10CompOption7setNameEPKcNS_4TypeE compiz (core) - Error: Couldn't load plugin 'unitymtgrabhandles' Initializing workarounds options...done Initializing scale options...done compiz (expo) - Warn: failed to bind image to texture Initializing expo options...done Initializing ezoom options...done compiz (core) - Error: Couldn't load plugin '/usr/lib/compiz/libunityshell.so' : /usr/lib/compiz/libunityshell.so: undefined symbol: _ZN10CompOption7setNameEPKcNS_4TypeE compiz (core) - Error: Couldn't load plugin 'unityshell' compiz (core) - Warn: unhandled ConfigureNotify on 0xc000a0! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0xc000a3! compiz (core) - Warn: this should never happen. you should probably file a bug about this. compiz (core) - Warn: unhandled ConfigureNotify on 0xc000a6! compiz (core) - Warn: this should never happen. you should probably file a bug about this. 
Initializing addhelper options...done Initializing animationaddon options...done Initializing annotate options...done Initializing bench options...done Initializing blur options...done Initializing clone options...done Initializing colorfilter options...done Initializing commands options...done Initializing crashhandler options...done Initializing cube options...done Initializing cubeaddon options...done Initializing extrawm options...done Initializing fadedesktop options...done Initializing firepaint options...done Initializing group options...done Initializing imgjpeg options...done Initializing kdecompat options...done Initializing loginout options...done Initializing mag options...done Initializing maximumize options...done Initializing mblur options...done Initializing neg options...done Initializing notification options...done Initializing obs options...done Initializing opacify options...done Initializing put options...done Initializing reflex options...done Initializing resizeinfo options...done Initializing ring options...done Initializing rotate options...done Initializing scaleaddon options...done Initializing scalefilter options...done Initializing screenshot options...done Initializing shelf options...done Initializing shift options...done Initializing showdesktop options...done Initializing showmouse options...done Initializing splash options...done Initializing staticswitcher options...done Initializing switcher options...done Initializing td options...done Initializing thumbnail options...done Initializing trailfocus options...done Initializing unitymtgrabhandles options...done Initializing unityshell options...done Initializing wallpaper options...done Initializing water options...done Initializing widget options...done Initializing winrules options...done Initializing wobbly options...done Setting Update "main_menu_key" Setting Update "run_key" Anything suspicious herein?
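    The "undefined symbol" errors when Compiz tries to load libunityshell.so and libunitymtgrabhandles.so usually point to mismatched Compiz and Unity builds, or to stale per-user settings. A sketch of a commonly suggested cleanup (package names assume stock 12.04; treat this as a starting point, not a guaranteed fix):

        # Reinstall the packages that provide the failing plugins
        sudo apt-get update
        sudo apt-get install --reinstall compiz compiz-plugins-main libcompizconfig0 unity
        # Set aside possibly corrupt per-user Compiz settings
        mv ~/.config/compiz-1 ~/.config/compiz-1.bak 2>/dev/null
        mv ~/.compiz-1 ~/.compiz-1.bak 2>/dev/null
        # Log out and back in, then retry: unity --reset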

    Read the article

  • Laptop monitor stopped working and can't be re-enabled on a Dell Latitude E6410

    - by xektrum
    I'm using Ubuntu 12.04 (upgraded from 11.10), and everything seemed to work fine until today, when my laptop monitor suddenly stopped working. Here are the facts: My laptop is a Dell Latitude E6410 with Intel graphics. The external monitor is attached through a docking station. Everything worked fine for about 6-7 months, then I upgraded to 12.04. The issue started today, about a week after the upgrade. I think it started after I ran CounterStrike 1.6: both monitors blinked, and then only the external monitor connected to the docking station continued to work. I thought at first it was a transient issue, but I've rebooted and removed the battery, and the same thing happens. The laptop monitor and external monitor both work fine up to the login screen, but after I log in the laptop screen goes black. Whenever I try to re-enable the laptop monitor from the Display manager I get errors: The selected configuration for displays could not be applied could not set the configuration for CRTC 63 Not sure what technical details are required, but here are some: $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected (normal left inverted right x axis y axis) 1440x900 60.0 + 40.0 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis) $ tail /var/log/Xorg.0.log [ 8367.132] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.132] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.174] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.174] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy [ 8367.265] (WW) intel(0): flip queue failed: Device or resource busy [ 8367.265] (WW) intel(0): Page flip failed: Device or resource busy I'm using gnome-shell, and the only ways I've been able to get both displays working have been: 1) Booting with the laptop disconnected from the dock and then re-attaching the external monitor with VGA instead of DVI, but that only worked for one session. 2) Removing xserver-xorg-video-intel, but then gnome-shell is gone, as is DRI. I would appreciate any suggestions. Regards, ============================= WORKAROUND FOUND ============================= So I tried a few things and here is what worked: I installed a newer version of xserver-xorg-video-intel (2.19 vs 2.17) from ppa:xorg-edgers/ppa. It didn't work at first (it only showed low-graphics mode), so I tried a different linux-image, 3.0.0-19-generic-pae instead of 3.2.0-24-generic-pae (the 12.04 Precise default), and then everything started to work again. Now I've installed 3.4.0-1-generic-pae from the same PPA and everything runs flawlessly, so I believe the issue is either with linux-image 3.2.0-24-generic-pae or with xserver-xorg-video-intel 2.17. Hope this helps someone in the future. 
PS: Now xrandr shows multiple modes for my laptop monitor $ xrandr Screen 0: minimum 320 x 200, current 3120 x 1050, maximum 8192 x 8192 eDP1 connected 1440x900+1680+0 (normal left inverted right x axis y axis) 303mm x 189mm 1440x900 60.0*+ 59.9 40.0 1360x768 59.8 60.0 1152x864 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm 1680x1050 60.0*+ 1280x1024 75.0 60.0 1152x864 75.0 1024x768 75.1 60.0 800x600 75.0 60.3 640x480 75.0 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis) HDMI2 disconnected (normal left inverted right x axis y axis) DP2 disconnected (normal left inverted right x axis y axis)
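    For reference, the workaround above boils down to commands like the following; the PPA name is taken from the post itself, while re-enabling the panel by its xrandr output name is an extra step worth trying first:

        # Try simply turning the panel back on by the name xrandr reports
        xrandr --output eDP1 --auto
        # Newer intel driver from the PPA named above
        sudo add-apt-repository ppa:xorg-edgers/ppa
        sudo apt-get update
        sudo apt-get install xserver-xorg-video-intel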

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs, and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about this and never read the documentation and/or warning messages. For example, when dealing with files, some developers are tempted to trace the names of the files. This is helpful: if we trace everything on error before appending a file name to a directory, it is easy to notice that the appended name is too long and that the bug in the code was forgetting to check the length of the concatenated string. But file names are sensitive data, and must never appear in logs. In the same way: Passwords, IP addresses and network information (MAC address, host name, etc.)¹, Database accesses, Direct input from the user and stored business data must never appear in the trace. So what other types of information must be banished from the logs? Are there any guidelines already written which I can use? ¹ Obviously, I'm not talking about things like IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not tracing the activity of untrusted entities. Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments: What am I doing with the logs? The logs of the application may be stored in plain text on the local hard disk, in a database (again in plain text), or in Windows Events. In every case, the concern is that those storage locations may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain-text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet. For example, if a customer has an issue with an application, we can ask her to run the application in full-trace mode and send us the log file. Also, some applications may automatically send crash reports to us (and even if there are warnings about sensitive data, in most cases customers don't read them). Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for bringing that up; I probably should take a look at those fields for some clues about what to include in the guidelines. Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in the logs at all and to never have to care about how and where those logs are stored.
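    To make the "customer submits logs to us" scenario concrete, here is a minimal redaction sketch a customer could run over a trace file before sending it; the patterns are illustrative assumptions and would have to be adapted to the application's real trace format:

        # Redact obvious sensitive values before submitting a trace file
        # (illustrative patterns only; adapt to the actual log format)
        sed -E \
            -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/<IP-REDACTED>/g' \
            -e 's/(password|passwd|pwd)[=:][^[:space:]]+/\1=<REDACTED>/gI' \
            trace.log > trace.sanitized.log

    Such a scrubbing pass is only a safety net, though; the point of the guidelines is that it is easier never to write the sensitive value in the first place.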

    Read the article

  • Must go through Windows Boot Loader to get to Grub

    - by Zach
    I just installed a fresh copy of Precise alongside Windows 7. I have two separate 750GB hard drives; /dev/sda holds the Windows partitions and /dev/sdb holds the Ubuntu partitions. Other than that, these are fresh installs of both Windows 7 and Ubuntu 12.04. Whenever I boot, GRUB doesn't load; instead I get a black screen with a single blinking (horizontal bar) cursor in the top right corner. However, if I hit Escape right as the BIOS/POST screen finishes, I see the Windows Boot Loader; if I hit Escape again to go back to the BIOS screen, GRUB then shows up and everything functions normally: I can boot into Ubuntu or Win7. I don't want to have to do the Escape, Escape, Wait, Boot trick every time. I have no idea what could be wrong or what information I could give you to help diagnose it. I have run sudo update-grub and it found everything normally. I tried adding the nomodeset flag to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, which searching around made me think might work. Thoughts on what I could do to fix this? EDIT: I've tried changing the boot order so that each of the drives in the BIOS (both are labeled as "Internal HDD") has had a try booting first. I think the problem may be that every time I boot, the BIOS boot order is different and I have to reset it. It seems not to be stable, but I'm not sure how to go about fixing that either. The machine has both traditional BIOS and UEFI. It came standard in "Legacy" mode, so it is currently set to boot through Legacy mode. I've reinstalled Ubuntu now, and if I hit Escape at the end of the BIOS/POST startup screen, it takes me to the GRUB menu; otherwise it automatically loads Windows. It seems like GRUB is now the acting bootloader, it just doesn't start automatically unless I ask the firmware for a boot menu. On my other machines, it has always started automatically at the end of BIOS/POST. EDIT 2: Using GParted, I just looked at my partitions; it seems that my linux-swap partition is currently flagged as the boot partition for my Ubuntu install. I currently have only 2 partitions: one "ext4" with mount point "/" and no flag, and the "linux-swap" with no mount point and the flag "boot". If I change the boot flag to be on "/", it does not reliably solve the problem. After 10 boots: 2 booted successfully to GRUB, 5 booted directly to Windows 7, and 3 booted to the black screen with the cursor and hung there. Further research makes me think this is an issue of the BIOS not reliably booting the hard drives in the same order, or not finding both hard drives. If I ask it to create a "boot menu", sometimes it has 2 entries for "Internal HDD", sometimes 1. Also, the list it creates changes order every time I bring it up, so it is not following a consistent boot sequence. Will report back if this turns out not to be an issue with GRUB.
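    Given the unstable BIOS drive ordering described above, one commonly suggested mitigation, sketched here with the device names from the post (adjust the partition number for "/"), is to install GRUB to the MBR of both drives and move the boot flag onto the root partition, so that whichever drive the BIOS picks first still lands in the same GRUB menu:

        # Install GRUB to the MBR of both internal drives
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub
        # Move the boot flag from the swap partition to "/"
        sudo parted /dev/sdb set 1 boot on   # "1" is an assumed partition number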

    Read the article

  • User password rejected on login screen but accepted on text console login

    - by MadsirR
    I had to force-shutdown my Ubuntu 12.04 64-bit system, after which I restarted and tried to log in as a normal user, which was rejected several times. I then logged in as guest and switched to a tty to log in to my regular account with my normal password, which succeeded. (So the password is still valid.) How can I regain access via the normal login procedure (welcome screen)? Update: When I tried to log on with my new password, it was again denied. When I deliberately tried to log on with a faulty password, an error message came back saying: Access denied - wrong password. I suppose the first time the password was not rejected, but the procedure was aborted for some reason. Some additional info after trying to find a solution: I am convinced it is a Compiz issue. Why? Before this happened, all sessions came to a grinding halt, regardless of whether I was logged on in a 2D or 3D environment. I found a link saying that I should remove Compiz and proceed in a 2D environment, which initially worked without a glitch, until my system went into a state of total oblivion. Only after that did the above-mentioned troubles appear. In the meantime I happened to find a thread with reference 17381, describing exactly what I have experienced. For now, I will try to cure this situation (later this week) and revert with the results, hopefully to close this post. In the meantime I cordially thank you all; even if you didn't kill the problem, you gave me the inspiration to look further and find a possible cure. Update 2: After 15 hrs of trial and error I called it quits. (When I decided to tackle this problem, I'd given myself 12 hrs, to avoid massive loss of time.) I decided to re-install Precise, since the "point 1" version has become available. Login is back to normal, as is the graphical environment. Response to mouse input is still appalling, especially when I have a series of screens open as "children" of a "parent" screen. It still completely locks up. I have installed Enlightenment, Gnome Classic, Gnome 3 and Cinnamon, and they all behave in a similar fashion. FOR THOSE WHO NEED A WAY OUT IN SITUATIONS LIKE THIS: Open a terminal with [Ctrl+Alt+F2]. Type [sudo killall firefox] (or whatever application you wish to terminate). Key in your password. Return to your graphical screen with [Ctrl+Alt+F7], and Bob's your uncle. Just re-open Firefox like nothing happened. Next time you are stuck: [Ctrl+Alt+F2], up arrow till you reach the command of your desire, [Ctrl+Alt+F7], and so on. Hope this is of help. My next move will be to upgrade the kernel to 3.4 from the repositories for 12.10. However, since this entails a totally new situation, I will start a new thread on this site to avoid topic pollution. I will keep you posted.
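    Beyond the Compiz angle, two quick checks from the text console often explain the "password works on tty but is rejected at the welcome screen" symptom; these are generic guesses, not something diagnosed from this particular post:

        # From the console (Ctrl+Alt+F2):
        # a root-owned ~/.Xauthority silently breaks the graphical login
        ls -l ~/.Xauthority
        sudo chown $USER: ~/.Xauthority
        # a completely full home partition can do the same
        df -h ~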

    Read the article

  • How to get faster graphics in KVM? VNC is painfully slow with Haiku OS guest, Spice won't install and SDL doesn't work

    - by Don Quixote
    I've been coming up to speed on the Haiku operating system, an open-source clone of BeOS 5 Pro. I'm using an Apple MacBook Pro as my development machine. Apple's BootCamp BIOS does not support more than four partitions on the internal hard drive. While I can set up extended and logical partitions, doing so will prevent any of the installed operating systems from booting. To run Haiku directly on the iron, I boot it off a USB stick. Using external storage is also helpful because I am perpetually out of filesystem space. While VirtualBox is documented to allow access to physical drives, I could not actually get that to work. Also, VirtualBox can only use one of the host CPU's cores. While VB guests can be configured for more than one CPU, the extra ones are only emulated. A full build of the Haiku OS takes 4.5 under VB. I had hoped to reduce build times by using KVM instead, but it's not working nearly as well as VirtualBox did. The Linux Kernel Virtual Machine is broken in all manner of fundamental ways as seen from Haiku. But I'm a coder; maybe I could contribute to fixing some of those problems. The first problem I've got is that Haiku's video in virt-manager is quite painfully slow. When I drag Haiku windows around the desktop, they lag quite far behind where my mouse is. It's quite difficult to move a window to a precise position on the screen. Just imagine that the mouse was connected to the window title bar with a really stretchy spring. Also, Haiku's mouse pointer lags quite far behind where I have moved it. I found lots of Personal Package Archives that enable Spice for QEMU/KVM at the Ubuntu Personal Package Archives. I tried a few of the PPAs but none of them worked; with one of them, the command "add-apt-repository" crashed with a traceback. There is a wiki page about Spice, but it says that it only works on 64-bit. My early-2006 MacBook Pro is 32-bit. Its Apple Model Identifier is MacBookPro1,1; these use Core Duos, NOT Core 2 Duos. I don't mind building a source deb for 32-bit if I can expect it to work. Is there some reason that Spice should be 64-bit only? Does it need features of the x86_64 instruction set architecture that x86 does not have? When I try using SDL from virt-manager, the configuration for Local SDL Window says "Xauth: /home/mike/.Xauthority". When I try to start my guest, virt-manager emits an error. When I googled the error message, the usual solution was to make ~/.Xauthority readable. However, .Xauthority does not exist in my home directory; instead I have an $XAUTHORITY environment variable. There is no way to configure SDL in virt-manager to use $XAUTHORITY instead of ~/.Xauthority, and copying the value of $XAUTHORITY into the file doesn't work either. I am ready to scream, because I've spent five fscking days trying to make KVM work for Haiku development. There is a whole lot more broken than just the slow video. All I really want to do for now is speed up my full builds of Haiku by using "jam -j2" to use both cores in my CPU. I may try Xen next, but the last time I monkeyed with Xen it was far, far more broken than I am finding KVM to be. Just for now, I would be satisfied if there were some way to use my USB stick as a drive in VirtualBox. VB does allow me to configure /dev/sdb as a drive, but it always causes a fatal error when I try to launch the guest. Thank You For Any Advice You Can Give Me.
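    Until virt-manager's Spice and SDL paths cooperate, one workaround sketch is to launch QEMU/KVM directly; this also covers the two concrete goals here: giving the guest both cores for "jam -j2" and handing it the USB stick as a raw disk. The image name and device path are assumptions:

        # Direct KVM launch with SDL graphics; needs read/write access
        # to /dev/sdb (e.g. via the "disk" group), avoiding any sudo/X mismatch
        kvm -m 1024 -smp 2 \
            -hda haiku.img \
            -drive file=/dev/sdb,if=ide,format=raw \
            -display sdl
        # Alternatively, wrap the raw USB stick so VirtualBox can use it
        VBoxManage internalcommands createrawvmdk -filename ~/usb.vmdk -rawdisk /dev/sdb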

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article
