Search Results

Search found 22463 results on 899 pages for 'sub query'.

Page 543/899 | < Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >

  • How to use the Nautilus search option

    - by Luis Alvarado
    In Nautilus, if I press CTRL+F I get a search box that helps me search the current directory and subdirectories for specific names, but what if I want to:
      - Find ALL files (including files without extensions)
      - Find a file without an extension (without the dot symbol or any other name/extension separator)
      - Find a file with/without a special character
      - Find all files that start/do not start with a character
      - Find all files that end/do not end with a character
      - Find all files that start/do not start with a character but end/do not end with a character
      - Find only files/folders
      - Find files with specific text in them
      - Find files with size less than/more than/equal to X
      - Find files modified/created on date X
    All of these searches would be done in the Nautilus search box I mentioned before. I ask this since KDE's search is much better at this and gives pretty good freedom to search for virtually anything, so I might not be learning how to use the Nautilus search option correctly. Note that I am talking about the first search, since some of these options show up AFTER a search is done so the user can narrow it down by doing a more specific search inside the search results (of the first search). I am asking here how to do any of the search options mentioned above in the first search.
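
    (Not a Nautilus answer, but for comparison: most of the filters listed above are easy to express in a few lines of Python. This is only a sketch of the kinds of criteria involved; the starting directory, size threshold and date cut-off are made-up examples.)

      # Sketch only: illustrates the kinds of file filters asked about above.
      # The starting directory, size threshold and date are made-up examples.
      import os
      from datetime import datetime, timedelta

      root = os.path.expanduser("~")            # search a directory and its subdirectories
      cutoff = datetime.now() - timedelta(days=7)

      for dirpath, dirnames, filenames in os.walk(root):
          for name in filenames:
              path = os.path.join(dirpath, name)
              try:
                  st = os.stat(path)
              except OSError:
                  continue                       # broken symlink, permission problem, etc.
              no_extension = "." not in name                  # file without an extension
              starts_with_a = name.startswith("a")            # starts with a given character
              ends_with_z = name.endswith("z")                # ends with a given character
              bigger_than_1mb = st.st_size > 1_000_000        # more than size X
              modified_recently = datetime.fromtimestamp(st.st_mtime) > cutoff  # modified after date X
              if no_extension and modified_recently:
                  print(path)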

    Read the article

  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0 will tend to perform much worse than select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0 For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like: <title>the big dog</title> <body>the ginger cat smiles</body> This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text" - this might have been a dummy column added to the table simply to allow us to create an index on it - or we could have created the index on either of the "real" columns - title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns used by it - if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body are altered. That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well: (1) Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space as the information to be indexed has to be duplicated. (2) Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables. In both cases, we still need to take care of updates - making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed. I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text: "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc. multi_table_indexing_1.sql : creates a summary table containing all the data, and uses multi_column_datastore Download link / View in browser multi_table_indexing_2.sql : creates "virtual" documents using a procedure as a user_datastore Download link / View in browser
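
    As a rough illustration only: this Python sketch issues the combined WITHIN-section query recommended above through the python-oracledb driver. The connection details and the id/title columns are placeholders, and it assumes the index on mytable(text) described above already exists.

      # Sketch only: runs the single-index, WITHIN-section query style from the article.
      # Connection details and the id/title column names are placeholders.
      import oracledb

      conn = oracledb.connect(user="scott", password="tiger", dsn="localhost/orclpdb1")
      with conn.cursor() as cur:
          cur.execute(
              """SELECT id, title
                   FROM mytable
                  WHERE CONTAINS(text, :q) > 0""",
              q="dog WITHIN title OR cat WITHIN body",
          )
          for row in cur:
              print(row)
      conn.close()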

    Read the article

  • Booting sequence. Ubuntu 12.04 installation and cohabitation with former OSes

    - by Stephane Rolland
    I am on the brink of installing Ubuntu 12.04 Precise Pangolin on the first primary partition of my hard drive. (A day in history for me since I had always kept MS Windows in that first place). But I have some fears: This is my last computer available (in the past I used to have 2 or even 3 machines so I could always un/plug HDs for recovery and rescue operations). The current booting sequence is not straightforward. So as to explain the boot sequence, let me briefly sum up the history of this laptop computer. It was a dedicated Windows Vista computer, with a 1st and only primary partition. Then I added Windows 7 (on the 2nd primary partition), letting the Windows Vista boot loader manage the boot sequence. Then I added Ubuntu 10.04 Lucid Lynx on the 1st sub-partition of the extended partition, asking Grub to be the boot loader. But when I ask Grub to launch Windows, it launches the Vista boot loader that manages the choice between Vista and 7. So in theory Grub is on the Master Boot Record - though I understand where the Vista boot loader remains. Now, I will no longer use the Ubuntu 10.04 (on the extended partition) and also the Windows Vista (on the first primary partition). I will install Ubuntu 12.04 on the first primary partition, asking it to install a new boot loader. I want to keep the Windows 7 that is already on the second primary partition. And I want it to be loaded by the Ubuntu boot loader (I don't know which one is included in this version)... And I am afraid the last point will not work.

    Read the article

  • Cannot use apt-get/dpkg -- Input/output error

    - by mecho
    I have bumped into an issue that doesn't allow me to do anything related to apt-get: install, remove, etc. Whenever I try to do something (e.g. sudo apt-get install firefox -f) it gives me the same error message: Reading database ... dpkg: unrecoverable fatal error, aborting: unable to open files list file for package `fontconfig-config': Input/output error E: Sub-process /usr/bin/dpkg returned an error code (2) I have tried to deal with the package `fontconfig-config' without success. I have found that the "Input/output error" is usually linked to physical problems with the HD, but I do not think that's the case: I am using that HD without any other problems. I have tried removing the mention of the package in /var/lib/dpkg/status as mentioned here. I have tried deleting all files related to the package in /var/lib/dpkg/info as I found somewhere. But I still cannot do anything. The funny bit comes when I look for the file that is giving me trouble: mecho@Ansible-MS-7680:/var/lib/dpkg/info$ ls fontconfig* ls: cannot access fontconfig-config.list: Input/output error fontconfig.list fontconfig.postinst fontconfig.preinst fontconfig.triggers fontconfig.md5sums fontconfig.postrm fontconfig.prerm This is after I deleted all the files ... it looks like fontconfig-config.list still exists but it doesn't show up! Any idea how to solve the problem? I am on Kubuntu Precise, with fontconfig-config_2.8.0-3ubuntu9.1

    Read the article

  • Failing to upgrade to linux-image-3.0.0-26-generic

    - by Dan Lee
    When I try to upgrade linux-image-3.0.0-26-generic I get the following errors: dpkg-deb (subprocess): data: internal bzip2 read error: 'DATA_ERROR' dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./lib/modules/3.0.0-26-generic/kernel/drivers/scsi/fnic/fnic.ko' No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.0.0-26-generic /boot/vmlinuz-3.0.0-26-generic Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.0.0-26-generic_3.0.0-26.42_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) A package failed to install. Trying to recover: dpkg: dependency problems prevent configuration of linux-image-generic: linux-image-generic depends on linux-image-3.0.0-26-generic; however: Package linux-image-3.0.0-26-generic is not installed. I don't know why this happens to me; earlier upgrades always worked without problems. Does anybody know how to fix this?

    Read the article

  • Hotspotting - tying Visualization into Other applications

    - by warren.baird
    AutoVue 20 included our first step towards providing a rich hotspotting capability that will allow visualization capabilities to be very tightly integrated into a wide range of applications. The idea is to have a close link between the visual representation of an object or place, and the business objects associated with that object or place. We've been working with our partner Enigma to enable this capability in their parts catalogue - the screenshot above shows what it looks like - the image on the right is a trimmed down version of AutoVue displaying a drawing of the various parts in an interactive way - when you click on item '6' in the AutoVue drawing, the appropriate item is highlighted in the parts catalogue - making it easy to select the parts you need, and to ensure that the correct parts are selected. The integration works in both directions - when you select a part in the parts catalogue, the appropriate part is highlighted in the drawing as well. To get slightly technical for a moment, this is a simple JavaScript integration - the external application provides a JavaScript callback that AutoVue calls whenever an item is clicked on, and AutoVue provides a JavaScript function to call when an item is selected in the external application. There are also direct Java APIs available. This makes it easy to tie AutoVue into many types of applications - you can imagine, in an asset lifecycle management application, being able to click on the appropriate asset in a drawing to create a work order, instead of finding the right asset ID to enter. Or being able to click on a part or sub-assembly to trigger a change order in a product lifecycle management application. We're pretty excited about the possibilities that this capability opens up, and plan on expanding on it a lot in the future. Would this be useful in your enterprise applications? What kinds of integrations like this would be useful for you? Let us know in the comments below!

    Read the article

  • Essential Links for the SharePoint Client Side Developer

    - by Mark Rackley
    Front End Developer? Client Side Developer? Middle Tier??? I’m covering all my bases.  Regardless, I’m sick and tired of Googling with Bing when I forget where information that I need often is located. I was getting ready to bookmark some of them when it hit me… “Hey Mark… (I don’t actually refer to myself in the third person), why don’t you put the links in a blog so that it looks like you are being helpful!” I can’t tell you how many times I’ve had to go back to some of my old blogs to remember how I did something. Seriously people, you need to start a blog; it’s the best way to remember how the frick you got something to work… and it looks like you are being helpful when in reality you are just forgetful.  So… where was I? Oh yeah… essential information that I’ve needed from time to time when I was not using Visual Studio. All of this info has come in handy from time to time. Know about these things and keep them in your tool belt; it’s amazing the stuff you can accomplish with just knowing where to look. What / Why:
      - SPServices: Widely used library written by Marc Anderson used to call SharePoint Web Services with jQuery.
      - jQuery: For SPServices and other cool stuff.
      - Easy Tabs: Essential tool for quick page enhancements. This widely used tool from Christophe Humbert groups multiple web parts into one tabbed display. Very quick and easy way to get oohs and ahs from End Users.
      - Convert Calculated Columns to HTML: Also from Christophe, I use this script all the time to convert HTML in my calculated columns so it actually displays as HTML and not with the tags.
      - Unlocking the Mysteries of Data View Web Part XSL Tags: This blog series from Marc Anderson makes it very easy to understand what’s going on with all those weird XSL tags in your data view web parts. Essential to make those things do what you want them to do.
      - Creating Parent / Child list relationships (2007) and (2010): By far my most viewed blog posts (tens and tens of thousands). I have posts for both 2007 and 2010 that walk you through automatically setting the lookup id on a list to its “parent”.
      - Set SharePoint Form fields using Query String Variables: Also widely read, this one walks you through taking a variable from your Query String and setting a form field to that value.
    Hmmm… I KNOW there are more, but I’m tired and drawing a blank.  I’ll try to add them when I remember them (or need them again and think “Oh, I forgot to add that one”). But it’s a start, and please feel free to add your own in the comments… So, it’s YOUR turn to be helpful. What little tip or trick do you find yourself using ALL the time that you think everyone should know about?

    Read the article

  • New features in TFS Demo Setup 1.0.0.2

    - by Tarun Arora
    Release Notes – http://tfsdemosetup.codeplex.com/ | Download | Source Code | Report a Bug | Ideas Just pushed out the 2nd release of the TFS Demo Setup on CodePlex; below is a quick look at some of the new features/improvements in the tool… Details of the existing features can be found here. Feature 1 – Set up Work Item Queries as Team Favorites The task board looks cooler when the team favourite work item queries show up on the task board. The demo setup console application now has the ability to set up the work item queries as team favorites for you. If you want to see how you can add Team Favorites programmatically, refer to this blog post here. Image 1 – Task board without Team Favorites Let's see how the TFS Demo Setup application sets up team favorites as part of the run… Open up DemoDictionary.xml and you should be able to see the new node <TeamFavorites>; this accepts multiple <TeamFavorite> elements. You simply need to specify the <Type> as Query and in the <Name> specify the name of the work item query that you would like added as a favorite. Image 2 – Highlighting the TeamFavorites block in DemoDictionary.xml So, when the demo setup application is run with the above config, the work item queries “Blocked Tasks” and “Open Impediments” are added as team favorites. They then show up on the task board, as highlighted in the screenshot below. Image 3 – Team Favorites set up during the TFS demo setup app execution Feature 2 – Choose what you want to set up and exclude the rest I had a great feature request come in asking for the ability to exclude parts of the setup at the sole discretion of the person running it. To accommodate this, I have added an attribute to each block; the attribute “Run” accepts “true” or “false”. If you set the flag to true then at the time of execution that block will be considered for setup, and if you set the flag to false, the block will be ignored during the setup. So, let's look at an example below… The attribute “Run” is set to true for TeamSettings, TeamFavorites, TeamMembers and WorkItems. So, all of these would be set up as part of the demo setup application execution. Image 4 – New attribute Run added to all blocks in DemoDictionary.xml If I did not want to recreate the team and did not want to add new work items, but only wanted to add favorites and team members to the existing team “AgileChamps1”, then I could simply run the application with the DemoDictionary.xml below. Note – TeamSettings Run=”false” and WorkItems Run=”false”. Image 5 – TeamFavorites and TeamMembers set to true and others set to false Feature 3 – Usability Improvement If you try and assign a work item to a team member that does not exist then the application throws a nasty exception. This behaviour has now been changed: upon adding such a work item, the work item will be created but not assigned to any user. The work item id will be printed to the console, making it simple for you to assign the work item manually. As you can see in the screenshot below, I am trying to assign the work item to a user “Tarun” and a user “v2”, both of which are *not valid users in my team project collection*, so the tool creates the work items, provides me the work item id, and lets me know that since the user is invalid the work item could not be assigned to the user. Better user experience, eh? Image 6 – Behaviour if a work item is assigned to a user who is not a valid user in the team project That’s about it for the current release. I have some new features planned for the next release.
    Meanwhile, if you have any ideas/comments please feel free to leave a comment. Stay tuned for more… Enjoy! Other posts on TFS Demo Setup can be found here.
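
    To make the configuration concrete, here is a guessed-at fragment of DemoDictionary.xml built only from the element and attribute names mentioned above (the real file may differ), plus a few lines of Python showing how the Run flag and the TeamFavorites entries could be read.

      # Sketch only: the XML shape is inferred from the post; the real DemoDictionary.xml may differ.
      import xml.etree.ElementTree as ET

      sample = """
      <Demo>
        <TeamSettings Run="false" />
        <TeamFavorites Run="true">
          <TeamFavorite>
            <Type>Query</Type>
            <Name>Blocked Tasks</Name>
          </TeamFavorite>
          <TeamFavorite>
            <Type>Query</Type>
            <Name>Open Impediments</Name>
          </TeamFavorite>
        </TeamFavorites>
        <WorkItems Run="false" />
      </Demo>
      """

      root = ET.fromstring(sample)
      for block in root:
          if block.get("Run", "true").lower() != "true":
              continue                                   # Feature 2: skip excluded blocks
          if block.tag == "TeamFavorites":
              for fav in block.findall("TeamFavorite"):  # Feature 1: queries to pin as favorites
                  print(fav.findtext("Type"), "->", fav.findtext("Name"))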

    Read the article

  • Restrict number of characters to be typed for af:autoSuggestBehavior

    - by Arunkumar Ramamoorthy
    When using AutoSuggestBehavior on a UI component, the auto suggest list is displayed as soon as the user starts typing in the field. In this article, we will see how to keep the autosuggest list from being displayed until the user has typed in a couple of characters. This would be more useful on slow networks and also when the autosuggest list is big. We could display a static message to let the user know that they need to type in more characters to get a list for picking a value from. The final output we would expect is like the image below. Let's see how we can implement this. Assuming we have an input text for the users to enter the country name and an autosuggest behavior is added to it: <af:inputText label="Country" id="it1"> <af:autoSuggestBehavior /> </af:inputText> Also, assuming we have a VO (we'll name it CountryView for this example), with a view criteria to filter the VO based on the bind variable passed in. Now, we would generate the View Impl class from the Java node (including bind variables) and then expose the setter method of the bind variable to the client interface. In the View layer, we would create a tree binding for the VO and a method binding for the setter method of the bind variable exposed above, in the pagedef file. As we've already added an input text and an autosuggestbehavior for the test, we would now need to build the suggested items for the autosuggest list. Let us add a method in the backing bean to return a List of select items to be bound to the autosuggest list. public List onSuggest(String searchTerm) { ArrayList<SelectItem> selectItems = new ArrayList<SelectItem>(); if(searchTerm.length()>1) { //get access to the binding context and binding container at runtime BindingContext bctx = BindingContext.getCurrent(); BindingContainer bindings = bctx.getCurrentBindingsEntry(); //set the bind variable value that is used to filter the View Object //query of the suggest list. The View Object instance has a View //Criteria assigned OperationBinding setVariable = (OperationBinding) bindings.get("setBind_CountryName"); setVariable.getParamsMap().put("value", searchTerm); setVariable.execute(); //the data in the suggest list is queried by a tree binding. JUCtrlHierBinding hierBinding = (JUCtrlHierBinding) bindings.get("CountryView1"); //re-query the list based on the new bind variable values hierBinding.executeQuery(); //The rangeSet, the list of queried entries, is of type //JUCtrlValueBndingRef. List<JUCtrlValueBindingRef> displayDataList = hierBinding.getRangeSet(); for (JUCtrlValueBindingRef displayData : displayDataList){ Row rw = displayData.getRow(); //populate the SelectItem list selectItems.add(new SelectItem( (String)rw.getAttribute("Name"), (String)rw.getAttribute("Name"))); } } else{ SelectItem a = new SelectItem("","Type in two or more characters..","",true); selectItems.add(a); } return selectItems; } So, what we are doing in the above method is checking the length of the search term, and if it is more than 1 (i.e. 2 or more characters), returning the actual suggest list. Otherwise, we create a read-only select item new SelectItem("","Type in two or more characters..","",true); and add it to the list of suggested items to be displayed. The last parameter of the SelectItem (boolean) makes it readOnly, so that users will not be able to select this static message from the displayed list.
Finally, bind this method to the input text's autosuggestbehavior's suggestedItems property. <af:inputText label="Country" id="it1"> <af:autoSuggestBehavior suggestedItems="#{AutoSuggestBean.onSuggest}"/> </af:inputText>

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N of the cells V in the grid, with each edge E being a binary value denoting the visibility between its vertices. Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2 depending on the implementation.) Update: If it is not possible to solve this in closed form, then maintaining a separate mapping of each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
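
    A minimal sketch of the "separate mapping" idea from the update: for every cell, keep the set of cell pairs whose sight line crosses it, and when a cell changes re-test only those pairs. The line walk below is a simple DDA-style approximation, not necessarily the traversal used to build the original table, and the grid size is a made-up example.

      # Sketch of the reverse mapping described in the update: cell -> pairs whose
      # sight line crosses that cell. Only those pairs need re-testing on a change.
      from collections import defaultdict
      from itertools import combinations

      def cells_on_line(a, b):
          """Cells visited by a straight line from cell a to cell b (simple DDA-style walk)."""
          (x0, y0), (x1, y1) = a, b
          steps = max(abs(x1 - x0), abs(y1 - y0), 1)
          return {(round(x0 + (x1 - x0) * t / steps), round(y0 + (y1 - y0) * t / steps))
                  for t in range(steps + 1)}

      def build_mapping(n):
          crossed_by = defaultdict(set)          # cell -> set of (a, b) pairs
          cells = [(x, y) for x in range(n) for y in range(n)]
          for a, b in combinations(cells, 2):
              for c in cells_on_line(a, b):
                  crossed_by[c].add((a, b))
          return crossed_by

      def edges_to_recompute(crossed_by, changed_cell):
          """The subset E_delta: only these visibility edges need re-testing."""
          return crossed_by.get(changed_cell, set())

      crossed_by = build_mapping(8)
      print(len(edges_to_recompute(crossed_by, (3, 3))), "of",
            (8 * 8) * (8 * 8 - 1) // 2, "edges need recomputation")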

    Read the article

  • Why are we as an industry not more technically critical of our peers? [closed]

    - by Jarrod Roberson
    For example: I still see people in 2011 writing blog posts and tutorials that promote setting the Java CLASSPATH at the OS environment level. I see people writing C and C++ tutorials dated 2009 and newer where the first lines of code are void main(). These are examples; I am not looking for specific answers to the above questions, but for why the culture of accepting sub-par knowledge in the industry is so rampant. I see people posting these same types of empirically wrong suggestions as answers on www.stackoverflow.com and they get lots of up votes and practically no down votes! The ones that get lots of down votes are usually answers to a question that wasn't asked, because of a lack of reading-comprehension skills, and not incorrect answers per se. Is our industry as a whole that ignorant? I can understand the internet in general being lazy, apathetic and uninformed, but our industry should be more on top of things like this and far more critical of people who are promoting bad habits and out-dated techniques and information. If we are really an engineering discipline, why aren't people held to a higher standard as they are in other engineering disciplines? I want to know why people accept bad advice and poor practices as the norm and are not more critical of their peers in the software industry.

    Read the article

  • I can't uninstall Ubuntu software

    - by cunix
    root@cunix:/home/cunix# sudo apt-get remove fern-wifi-cracker Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: libqt4-test libqt4-sql-mysql mysql-common libqt4-xmlpatterns libqt4-help python-qt4 python-sip libqt4-sql-sqlite libqt4-sql macchanger libqt4-designer libmysqlclient16 python-scapy libqt4-scripttools Use 'apt-get autoremove' to remove them. The following packages will be REMOVED: fern-wifi-cracker 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 3,514kB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 167661 files and directories currently installed.) Removing fern-wifi-cracker ... dpkg (subprocess): unable to execute installed pre-removal script (/var/lib/dpkg/info/fern-wifi-cracker.prerm): Exec format error dpkg: error processing fern-wifi-cracker (--remove): subprocess installed pre-removal script returned error exit status 2 Errors were encountered while processing: fern-wifi-cracker E: Sub-process /usr/bin/dpkg returned an error code (1) How do I uninstall it?

    Read the article

  • Error when running debuild on package source

    - by Chris Wilson
    I'm attempting to build the squeak-vm source but am getting an error every time I do so. The output is: dpkg-buildpackage -rfakeroot -D -us -uc dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package squeak-vm dpkg-buildpackage: source version 1:4.0.3.2202-2 dpkg-buildpackage: source changed by José L. Redrejo Rodríguez <[email protected]> dpkg-source --before-build squeak-vm-4.0.3.2202 dpkg-buildpackage: host architecture i386 fakeroot debian/rules clean dh_testdir dh_testroot rm -f build-stamp configure-stamp rm -f unix/cmake/config.sub unix/cmake/config.guess /usr/bin/make -f debian/rules unpatch make[1]: Entering directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202' QUILT_PATCHES=debian/patches \ quilt --quiltrc /dev/null pop -a -R || test $? = 2 Patch linex.patch does not remove cleanly (refresh it or enforce with -f) make[1]: *** [unpatch] Error 1 make[1]: Leaving directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202' make: *** [clean] Error 2 dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2 debuild: fatal error at line 1337: dpkg-buildpackage -rfakeroot -D -us -uc failed

    Read the article

  • Columnstore Case Study #1: MSIT SONAR Aggregations

    - by aspiringgeek
    Preamble: This is the first in a series of posts documenting big wins encountered using columnstore indexes in SQL Server 2012 & 2014.  Many of these can be found in this deck along with details such as internals, best practices, caveats, etc.  The purpose of sharing the case studies in this context is to provide an easy-to-consume quick-reference alternative. Why Columnstore? If we’re looking for a subset of columns from one or a few rows, given the right indexes, SQL Server can do a superlative job of providing an answer. If we’re asking a question which by design needs to hit lots of rows—DW, reporting, aggregations, grouping, scans, etc.—SQL Server has never had a good mechanism—until columnstore. Columnstore indexes were introduced in SQL Server 2012. However, they're still largely unknown. Some adoption blockers existed; yet columnstore was nonetheless a game changer for many apps.  In SQL Server 2014, potential blockers have been largely removed & they're going to profoundly change the way we interact with our data.  The purpose of this series is to share the performance benefits of columnstore & to document why columnstore is a compelling reason to upgrade to SQL Server 2014. App: MSIT SONAR Aggregations At MSIT, performance & configuration data is captured by SCOM. We archive much of the data in a partitioned data warehouse table in SQL Server 2012 for reporting via an application called SONAR.  By definition, this is a primary use case for columnstore—report queries requiring aggregation over large numbers of rows.  New data is refreshed each night by an automated table partitioning mechanism—a best-practices scenario for columnstore. The Win Compared to performance using classic indexing, which resulted in the expected query plan selection including partition elimination, SQL Server 2012 nonclustered columnstore query performance increased significantly.  Logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Other than creating the columnstore index, no special modifications or tweaks to the app or database schema were necessary to achieve the performance improvements.  Existing nonclustered indexes were rendered superfluous & were deleted, thus mitigating maintenance challenges such as defragging as well as conserving disk capacity. Details The table below provides the raw data & summarizes the performance deltas.
                                      Logical Reads (8K pages)   CPU (ms)   Duration (ms)
      Columnstore                     160,323                    20,360     9,786
      Conventional Table & Indexes    9,053,423                  549,608    193,903
      Delta (ratio)                   x56                        x27        x20
    The charts provide additional perspective on this data.  "Conventional vs. Columnstore Metrics" documents the raw data.  Note on this linear display the magnitude of the conventional index performance vs. columnstore.  The “Metrics (delta)” chart expresses these values as a ratio. Summary For DW, reports, & other BI workloads, columnstore often provides significant performance enhancements relative to conventional indexing.  I have documented here, in the first in a series of reports on columnstore implementations, results from an initial implementation at MSIT in which logical reads were reduced by over a factor of 50; both CPU & duration improved by factors of 20 or more.  Subsequent features in this series document performance enhancements that are even more significant.

    Read the article

  • PECL OCI8 2.0 Production Release Announcement

    - by cj
    The PHP OCI8 2.0.6 extension for Oracle Database is now "production" status. The source code is available on PECL. This can be used immediately to update your OCI8 extension in PHP 5.2 and later versions. The extension compiles with Oracle 10.2 or later client libraries. Oracle's standard cross-version database connectivity applies. OCI8 2.0 and PHP 5.5.5 RPMs for Oracle and Red Hat Linux are available from oss.oracle.com. Windows DLLs are available on PECL for PHP 5.3, PHP 5.4 and PHP 5.5. OCI8 2.0 source code will also be automatically included in the next major version of PHP. New Functionality Oracle Database 12c Implicit Result Set support. IRSs make it easy to pass query results back from stored PL/SQL procedures or anonymous PL/SQL blocks. Individual IRS statement resources, each corresponding to a single query, can be obtained with the new function oci_get_implicit_resultset(). These 'child' statement resources can be passed to any oci_fetch_* function. See Using PHP and Oracle Database 12c Implicit Result Sets and the PHP Manual: oci_get_implicit_resultset(). DTrace Dynamic Trace static probes. This well-respected tracing framework is available on a number of platforms, including Oracle Linux. PHP OCI8 static user-space probes can be enabled with PHP's --enable-dtrace configuration option. See Using PHP DTrace on Oracle Linux. Documentation is also available in the PHP Manual OCI8 and DTrace Dynamic Tracing Improved Functionality Using oci_execute($s, OCI_NO_AUTO_COMMIT) for a SELECT no longer unnecessarily initiates an internal ROLLBACK during connection close. This can improve overall scalability by reducing "round trips" between PHP and the database. Changed Functionality PHP OCI8 2.0's minimum prerequisites are now PHP 5.2 and Oracle client library 10.2. Later versions of both are usable and, in fact, recommended. Use the older PHP OCI8 1.4.10 extension when using PHP 4.3.9 through to PHP 5.1.x, or when only Oracle Database 9.2 client libraries are available. oci_set_*($connection, ...) metadata-setting call error handling is fixed so that oci_error($connection) works for these calls. Note: The old, deprecated function aliases like ocilogon still exist but are not recommended for new applications. Phpinfo() Changes Some cosmetic changes were made to the output of php --ri oci8 and the phpinfo() function. The oci8.event and oci8.connection_class values are now shown only when the Oracle client libraries support the respective functionality. Connection statistics are now in a separate phpinfo() table. Temporary LOB and Collection support status lines in phpinfo() output were removed. These two features have always been enabled since 2007. Oci_internal_debug() Changes The oci_internal_debug() function is now a no-op. Use PHP's --enable-dtrace functionality with DTrace or SystemTap instead. References OCI8 Extension source code and Windows DLLs http://pecl.php.net/package/oci8 Oracle Linux RPMs oss.oracle.com PHP Manual for OCI8 OCI8 and DTrace Dynamic Tracing Oracle OpenWorld Conference paper What's New in Oracle Database 12c for PHP

    Read the article

  • Can't update kernel to 2.6.35.27

    - by Uri Herrera
    When I try to update I get this message; I'm guessing I'm missing something here?
      Filesystem   Type      Size  Used  Avail  Use%  Mounted on
      /dev/sdb6    ext4       43G  7.7G    33G   20%  /
      none         devtmpfs  1.6G  349k   1.6G    1%  /dev
      none         tmpfs     1.6G  5.9M   1.6G    1%  /dev/shm
      none         tmpfs     1.6G  218k   1.6G    1%  /var/run
      none         tmpfs     1.6G     0   1.6G    0%  /var/lock
      /dev/sdb2    fuseblk   258G  198G    60G   77%  /media/Backup
      /dev/sda1    fuseblk   321G  175G   146G   55%  /media/Media
      /dev/sdb1    ext4       96M   84M   6.7M   93%  /boot
      /dev/sdb7    ext4      175G   81G    86G   49%  /home
    Here's the output: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: linux-image-2.6.35-22-generic 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. 5 not fully installed or removed. After this operation, 107MB disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 282211 files and directories currently installed.) Removing linux-image-2.6.35-22-generic ... Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 2.6.35-22-generic /boot/vmlinuz-2.6.35-22-generic /etc/default/grub: 23: Syntax error: newline unexpected run-parts: /etc/kernel/postrm.d/zz-update-grub exited with return code 2 Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/linux-image-2.6.35-22-generic.postrm line 328. dpkg: error processing linux-image-2.6.35-22-generic (--remove): subprocess installed post-removal script returned error exit status 1 Errors were encountered while processing: linux-image-2.6.35-22-generic E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Use Those Schemas, People!

    - by BuckWoody
    Database schemas are just containers – they aren’t users or anything else – think of a subdirectory on the hard drive. In early versions of SQL Server we “hid” schemas, placing all objects under “dbo”, which gave the erroneous perception that schemas are users. In SQL Server 2005, we “un-hid” or re-introduced schemas within the database. Users can have a default schema (a place where their new objects go), you can add new schemas and transfer objects between them, and they have many other benefits. But I still see a lot of applications, developed by shops I know as well as vendors, that don’t make use of a schema. Everything is piled under dbo. I completely understand this – since permissions can be granted to a schema, they feel a lot like a user, so it’s just easier not to worry about both users and schemas when you create a database. But if you’ll use them properly you can make your application more understandable and portable. You should at least take a few minutes and read more about them – you owe it to your users: http://msdn.microsoft.com/en-us/library/ms190387.aspx

    Read the article

  • Software Center not Opening at all Error

    - by Newbie
    When I open Software Center from the menu, it says "cannot open software database. Please reinstall the software-center package." When I run software-center in a terminal, this error comes up: 2014-05-28 09:11:20,584 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2014-05-28 09:11:20,593 - softwarecenter.ui.gtk3.app - ERROR - xapian open failed Traceback (most recent call last): File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 302, in __init__ if self.db.schema_version() != DB_SCHEMA_VERSION: File "/usr/share/software-center/softwarecenter/db/database.py", line 289, in schema_version return self.xapiandb.get_metadata("db-schema-version") File "/usr/share/software-center/softwarecenter/db/database.py", line 177, in xapiandb self._db_per_thread[thread_name] = self._get_new_xapiandb() File "/usr/share/software-center/softwarecenter/db/database.py", line 190, in _get_new_xapiandb xapiandb = xapian.Database(self._db_pathname) File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3667, in __init__ _xapian.Database_swiginit(self,_xapian.new_Database(*args)) DatabaseCorruptError: /var/cache/software-center/xapian/iamchert: Chert version file should be 28 bytes, actually 0 Now, when I run the command sudo apt-get remove software-center I get: dpkg: error: corrupt info database format file '/var/lib/dpkg/info/format' E: Sub-process /usr/bin/dpkg returned an error code (2) I had Ubuntu before but it kind of got corrupted. Now I have freshly reinstalled it, and even from the start Software Center is not opening and this error comes up. I hope you have a solution. Thanks.

    Read the article

  • Keeping an Eye on Your Storage

    - by Fatherjack
    There are plenty of resources that advise you about looking for signs that your storage hardware is having problems. SQL Server alerts for errors 823, 824 and 825 are covered here by Paul Randal of SQL Skills: http://www.sqlskills.com/blogs/paul/a-little-known-sign-of-impending-doom-error-825/ and here by me: https://www.simple-talk.com/blogs/2011/06/27/alerts-are-good-arent-they/. Now, until very recently I wasn’t aware that there was a different way to track the 823 + 824 errors. It was by complete chance that I happened to be searching about in the msdb database when I found the suspect_pages table. Running a query against it I got zero rows. This, as it turns out, is a good thing. Highlighting the table name and pressing F1 got me nowhere – is it just me or does Books Online fail to load properly for no obvious reason sometimes? So I typed the table name into the search bar and got my local version of http://msdn.microsoft.com/en-us/library/ms174425.aspx. From that we get the following description: "Contains one row per page that failed with a minor 823 error or an 824 error. Pages are listed in this table because they are suspected of being bad, but they might actually be fine. When a suspect page is repaired, its status is updated in the event_type column." So, in the table we would, on healthy hardware, expect to see zero rows, but on disks that are having problems the event_type column would show us what is going on. Where there are suspect pages on the disk the rows would have an event_type value of 1, 2 or 3; where those suspect pages have been restored, repaired or deallocated by DBCC, the value would be 4, 5 or 7. Having this table means that we can set up SQL Monitor to check the status of our hardware, as we can create a custom metric based on the query below: USE [msdb] go SELECT COUNT(*) FROM [dbo].[suspect_pages] AS sp All we need to do is set the metric to collect this value and set an alert to email when the value is not 0, and we are then able to let SQL Monitor take care of our storage. Note that the suspect_pages table does not have any updates concerning error 825, which the links at the top of the page cover in more detail. I would suggest that you set SQL Monitor to alert on the suspect_pages table in addition to taking other measures to look after your storage hardware, and not have it as your only precaution. Microsoft actually pass ownership and administration of the suspect_pages table over to the database administrator (Manage the suspect_pages Table (SQL Server)) and in a surprising move (to me at least) advise DBAs to actively update and archive data in it. The table will only ever contain a maximum of 1000 rows and, once full, new rows will not be added. Keeping an eye on this table is pretty important, although in my opinion, if you get to 1000 rows in this table and are not already waiting for new disks to be added to your server you are doing something wrong; and if you have 1000 rows in there then you need to move data out quickly, because you may be missing some important events on your server.
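
    If SQL Monitor is not available, the same check can be scripted by hand. Below is a rough sketch that polls the count with pyodbc; the connection string is a placeholder and the "alert" is just a print, but the query is the one shown above.

      # Sketch only: polls msdb.dbo.suspect_pages and warns when any rows appear.
      # The connection string is a placeholder; alerting here is just a print.
      import pyodbc

      CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                  "SERVER=myserver;DATABASE=msdb;Trusted_Connection=yes")

      def suspect_page_count():
          with pyodbc.connect(CONN_STR) as conn:
              row = conn.cursor().execute(
                  "SELECT COUNT(*) FROM dbo.suspect_pages").fetchone()
              return row[0]

      count = suspect_page_count()
      if count > 0:
          print(f"WARNING: {count} row(s) in suspect_pages - check your storage")
      else:
          print("suspect_pages is empty - no 823/824 errors recorded")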

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts. # The appointment list is always sorted on start time appointment_list = [ <Appointment: 10:00 -> 12:00>, <Appointment: 11:00 -> 12:30>, <Appointment: 13:00 -> 14:00>, <Appointment: 13:30 -> 14:30>, ] Constraints are that appointments: cannot be after 15:00 cannot be before 9:00 This is the naive algorithm: for i, app in enumerate(appointment_list): for possible_conflict in appointment_list[i+1:]: if possible_conflict.start < app.end: difference = app.end - possible_conflict.start possible_conflict.end += difference possible_conflict.start += difference else: break This results in the following resolution, which obviously breaks those constraints, and the last appointment will have to be pushed to the following day. appointment_list = [ <Appointment: 10:00 -> 12:00>, <Appointment: 12:00 -> 13:30>, <Appointment: 13:30 -> 14:30>, <Appointment: 14:30 -> 15:30>, ] Obviously this is sub-optimal: it performs 3 appointment moves when the conflict could have been resolved with one. If we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that should be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or depth-first solution search? When do I know if the solution is "good enough"?
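
    For reference, here is a self-contained version of the naive push-down approach described above, using datetime objects. The Appointment class and the times are made up to match the example, and running it reproduces the same sub-optimal result (the last appointment ends at 15:30, past the 15:00 constraint).

      # Runnable reconstruction of the naive algorithm described above.
      # Appointment is a stand-in class; times match the question's example.
      from datetime import datetime

      class Appointment:
          def __init__(self, start, end):
              self.start, self.end = start, end
          def __repr__(self):
              return f"<Appointment: {self.start:%H:%M} -> {self.end:%H:%M}>"

      def t(hhmm):
          return datetime(2024, 1, 1, *map(int, hhmm.split(":")))

      appointment_list = [
          Appointment(t("10:00"), t("12:00")),
          Appointment(t("11:00"), t("12:30")),
          Appointment(t("13:00"), t("14:00")),
          Appointment(t("13:30"), t("14:30")),
      ]

      # Naive resolution: push every conflicting appointment down.
      for i, app in enumerate(appointment_list):
          for possible_conflict in appointment_list[i + 1:]:
              if possible_conflict.start < app.end:
                  difference = app.end - possible_conflict.start
                  possible_conflict.start += difference
                  possible_conflict.end += difference
              else:
                  break

      print(appointment_list)
      # Three appointments were moved; the last one now ends at 15:30,
      # past the 15:00 constraint mentioned in the question.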

    Read the article

  • How to remove unmet dependencies created by VLC player in Ubuntu 12.04 LTS?

    - by Anti
    Output on trying to remove vlc with sudo apt-get remove vlc: niranjan@niranjan-OEM:~$ sudo apt-get remove vlc Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: libvlccore5 : Depends: vlc-data (= 2.0.8-0ubuntu0.12.04.1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). Trying sudo apt-get -f install niranjan@niranjan-OEM:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: vlc-data The following NEW packages will be installed: vlc-data 0 upgraded, 1 newly installed, 0 to remove and 452 not upgraded. 8 not fully installed or removed. Need to get 0 B/10.3 MB of archives. After this operation, 30.4 MB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 95% dpkg: unrecoverable fatal error, aborting: files list file for package 'libavutil51' is missing final newline E: Sub-process /usr/bin/dpkg returned an error code (2)

    Read the article

  • How to subtract 1 from an original count in an ASP.NET gridview

    - by SAMIR BHOGAYTA
    I have a gridview that contains a count (which is Quantity) where I have a button that adds a row under the original row, and I need the sub-row's count (Quantity) to subtract one from the original row's Quantity. EX: Before button click: Original row = 3. After click: Original row = 2, Subrow = 1. Code: ASP.NET // FUNCTION : Adds a new subrow protected void gvParent_RowCommand(object sender, GridViewCommandEventArgs e) { if (e.CommandName.Equals("btn_AddRow", StringComparison.OrdinalIgnoreCase)) { // Get the row that was clicked (index 0. Meaning that 0 is 1, 1 is 2 and so on) // Objects can be null, Int32s cannot. // Int16 = 2 bytes long (short) // Int32 = 4 bytes long (int) // Int64 = 8 bytes long (long) int i = Convert.ToInt32(e.CommandArgument); // create a DataTable based off the view state DataTable dataTable = (DataTable)ViewState["gvParent"]; for (int part = 0; part 1) { dataTable.Rows[part]["Quantity"] = oldQuantitySubtract - 1; // Insert a new row at a specific index DataRow dtAdd = dataTable.NewRow(); for (int k = 0; k dtAdd[k] = dataTable.Rows[part][k]; dataTable.Rows.InsertAt(dtAdd, i + 1); break; //dataTable.Rows.Add(dtAdd); } } // Rebind the data gvParent.DataSource = dataTable; gvParent.DataBind(); } }

    Read the article

  • Karmetasploit (aircrack-ng) Not consistantly Broadcasting AP ssid

    - by Sparky
    I cannot seem to get Karmetasploit to broadcast my AP. Actually, taking it back a few steps, I cannot get airbase-ng (v.r2154) to broadcast an SSID. I have seen it broadcast a few intermittent times (not many at all), but most of the time it doesn't show up at all. When it showed up the last time it also came up as ad-hoc. The simplest command I have tried: sudo airbase-ng -e "Wifi-test" -c 11 -v mon0 (I have tried with/without -c and -P -C 30.) It appears to work just fine on the attacking machine, but nothing gets broadcast. I have tried viewing from (3) different computers (WinXP, Win7, Ubuntu 12.04). Additionally, I am running Ubuntu 12.04. I have tried (3) different wireless cards: Internal card: Intel 4965; External USB: Ubiquiti Atheros carl9170; External USB: ALFA AWUS036H Realtek RTL8187L. I have tried putting each in/out of monitor mode (airmon-ng start monX). I have also tested to see if injection is working: sudo aireplay-ng -9 mon0 sudo aireplay-ng -9 mon0 22:37:54 Trying broadcast probe requests... 22:37:55 Injection is working! 22:37:56 Found 4 APs ... ... Has anyone experienced this issue and found a solution? The aircrack-ng forum site has been down for some time, so I cannot get advice from that site. Thanks, Sparky

    Read the article

  • Broad topics needed for teaching game development

    - by livingtech
    I am going to be doing a presentation on game development to an iPhone user group in the near(ish) future. My audience are iPhone developers, but not necessarily very experienced ones, and this is meant to be an introduction. My question is, what broad topics are needed to understand game development? I acknowledge that this is fairly subjective, but I really am hoping for a comprehensive list of high-level topics that apply to a broad enough swath of games that anyone interested in the topic SHOULD know about them. I would be ecstatic with some pointers to any resources that attempt to make a list such as this. (I have looked, but my google-fu is failing me tonight.) Here's what I have so far:
      - The game loop (with a sub-note about event-driven games)
      - 2D animation: sprites/texture maps
      - 3D animation: importance of frameworks, modeling software
      - Particles and particle effects
      - Hit detection
      - AI
    Obviously I will not be covering all these topics with any depth, more like simply defining them so that after my talk, the audience will (hopefully) be able to wrap their heads around how any given game might be developed. What am I missing?
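
    As a concrete starting point for the first topic on that list, a bare-bones fixed-timestep game loop fits in a few lines. The Python sketch below is generic rather than iPhone-specific, and update() and render() are empty placeholders.

      # Minimal fixed-timestep game loop sketch (generic, not iPhone-specific).
      # update() and render() are placeholders for real game logic and drawing.
      import time

      TIMESTEP = 1.0 / 60.0          # simulate at 60 updates per second

      def update(dt):
          pass                       # advance game state by dt seconds

      def render(alpha):
          pass                       # draw; alpha is how far we are into the next step

      def run(duration=1.0):
          previous = time.perf_counter()
          accumulator = 0.0
          end = previous + duration
          while previous < end:      # a real loop runs until the player quits
              now = time.perf_counter()
              accumulator += now - previous
              previous = now
              while accumulator >= TIMESTEP:
                  update(TIMESTEP)   # fixed-size simulation steps keep physics stable
                  accumulator -= TIMESTEP
              render(accumulator / TIMESTEP)

      run()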

    Read the article

  • Object oriented EDI handling in PHP

    - by Robert van der Linde
    I'm currently starting a new sub-project where I will:
      - Retrieve the order information from our mainframe
      - Save the order information to our web app's database
      - Send the order as EDI (either D01B or D93A)
      - Receive the order response, despatch advice and invoice messages
      - Do all kinds of fun things with the resulting datasets
    However, I am struggling with my initial class designs. The order information will be retrieved from the mainframe, which will result in an "AOrder" class. This isn't a problem; I am just not sure how to mold this local object into an EDI string. Should I create EDIOrder/EDIOrderResponse/etc. classes with matching decorators (EDIOrderD01BDecorator, EDIOrderD93ADecorator)? Do I need builder objects, or can I do: // $myOrder is an instance of AOrder $myOrder->toEDIOrder(); $decorator = new EDIOrderD01BDecorator($myOrder); $edi = $decorator->getEDIString(); And it'll have to work the other way around as well. Is the following code a good way of handling this problem, or should I go about this differently? $ediString = $myEDIMessageBroker->fetch(); $ediOrderResponse = EDIOrderResponse::fromString($ediString); I'm just not so sure about how I should go about designing the classes and the interactions between them. Thanks for reading and helping.
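
    Not an answer, just an illustration of the decorator shape the question proposes, sketched in Python rather than PHP for brevity; every class, method and segment here is hypothetical and the output is not a valid EDIFACT message.

      # Sketch only: mirrors the decorator idea from the question (names hypothetical,
      # Python instead of PHP). Real D01B/D93A segment building is not shown.
      class AOrder:
          def __init__(self, order_no, lines):
              self.order_no = order_no
              self.lines = lines            # e.g. [("widget", 3), ("gadget", 1)]

      class EDIOrderDecorator:
          """Base decorator: wraps an AOrder and renders it as an EDI string."""
          def __init__(self, order):
              self.order = order
          def get_edi_string(self):
              raise NotImplementedError

      class EDIOrderD01BDecorator(EDIOrderDecorator):
          def get_edi_string(self):
              # Placeholder segments, not a valid EDIFACT D01B message.
              segments = [f"BGM+220+{self.order.order_no}"]
              segments += [f"LIN+{i}++{sku}" for i, (sku, _) in enumerate(self.order.lines, 1)]
              return "'".join(segments) + "'"

      order = AOrder("12345", [("widget", 3), ("gadget", 1)])
      print(EDIOrderD01BDecorator(order).get_edi_string())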

    Read the article

< Previous Page | 539 540 541 542 543 544 545 546 547 548 549 550  | Next Page >