Search Results

Search found 1124 results on 45 pages for 'indexing'.

Page 31/45 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Why doesn't the People Pane in Outlook 2010 show appointments for individual contacts?

    - by Curtis
    Outlook 2010 will not show appointments in the People Pane. Under the Activities tab, if I go to All Items, the appointments eventually show, but this takes a seriously long time. If I then click off the tab to look at another field and return to All Items, all the appointments are gone again. I need to be able to: open a contact and see when that contact has appointments; open an appointment and see which contacts are attached to that appointment. It works well going from the appointment card to the contact, but it has me completely frustrated going from the contact card to find the appointments. I have tried many things but cannot solve this problem. My setup is as follows: Exchange Server, Windows 7 Ultimate, indexing enabled, cached Exchange mode enabled. Help! This is the whole reason I installed Outlook 2010.

    Read the article

  • Killer content for my Kindle - The Economist with no need for an iPad - yipeee!

    - by Liam Westley
    I admit it, I was jealous of someone's iPad. They were reading The Economist, for free, because they were a print subscriber. I'm a print subscriber too. However, I don't have an iPad or an iPhone, just an Android phone and a Kindle. As soon as I got the Kindle, I looked up how to get The Economist on it. £9.99 per month. Hmmm, twice as much again as my print subscription, and I wanted to keep the print subscription. No way, Amazon. Fortunately some nice person wrote similar comments on The Economist subscription for Kindle, but added a very important additional nugget of information: there is no need, because as a print subscriber you can just use the free Calibre e-book creation tool. So I downloaded it, searched for The Economist online 'recipe', entered my login name and password (part of my print subscription) and off went Calibre to screen-scrape every single article from the Christmas 2010 issue into a .mobi file, complete with front cover image and full indexing. It's wonderful. Truly wonderful. Every section individually indexed, with each article separated and all inline images preserved. It even feels wonderfully retro, back to the days when The Economist only used black and white images. So many thanks to the guys behind Calibre and The Economist recipe creators. Finally, I have the essential Kindle content that I've been waiting for.

    Read the article

  • LobsterPot Solutions in the USA

    - by Rob Farley
    We're expanding! I'm thrilled to announce that Microsoft Gold Partner LobsterPot Solutions has opened another branch, appointing the amazing Ted Krueger (5-time SQL MVP awardee) as the US lead. Ted is well-known in the SQL Server world, having written books on indexing, consulting and being a DBA (not to mention contributing chapters to both MVP Deep Dives books). He is an expert on replication and high availability, and strong in the Business Intelligence space – experience which is both broad and deep. Ted is based in the south-east corner of Wisconsin, just north of Chicago. He has been a consultant for eons and has helped many clients with their projects and problems, taking the role of both technical lead and consulting lead. He is also tireless in supporting and developing the SQL Server community, presenting at conferences across America, and helping people through his blog, Twitter and more. Despite all this – it's neither his technical excellence with SQL Server nor his consulting skill that made me want him to lead LobsterPot's US venture. I wanted Ted because of his values. In the time I've known Ted, I've found his integrity to be excellent, and found him to be morally beyond reproach. This is the biggest priority I have when finding people to represent the LobsterPot brand. I have no qualms in recommending Ted's character or work ethic. It's not just my thoughts on him – all my trusted friends that know Ted agree about this. So last week, LobsterPot Solutions LLC was formed in the United States, and in a couple of weeks, we will be open for business! LobsterPot Solutions can be contacted via email at [email protected], on the web at either www.lobsterpot.com.au or www.lobsterpotsolutions.com, and on Twitter as @lobsterpot_au and @lobsterpot_us. Ted Krueger blogs at LessThanDot, and can also be found on Twitter and LinkedIn. This post is cross-posted from http://lobsterpotsolutions.com/lobsterpot-solutions-in-the-usa

    Read the article

  • quick folder access on Linux (akin to Launchy)

    - by Eli Bendersky
    Launchy is a great piece of software; I use it on Windows mainly for quickly accessing folders. I love its auto-indexing in the background, and hardly ever browse through folders manually these days, which saves me lots of time. On Linux (Ubuntu 9.10), however, I usually "live" in the terminal. Therefore, Launchy on Linux (or Gnome Do, or its other replacements) is not what I need, as it opens the file manager, and I don't need the file manager. What I do need is something that indexes my folders and lets me cd into them quickly in the terminal. For example: mycd python_c will cd to ~/dev/scripts/python_code. I hope my intention is understood :-) Are you familiar with such tools?
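
    Purely as an illustration of the kind of tool being described, a small Python sketch: it walks an assumed root once per call to build an index of directories, then prints the first one whose name contains the query, so a shell wrapper could cd into whatever it prints. The script name, root path and wrapper are all made up, and a real tool would cache the index instead of rebuilding it each time.

        #!/usr/bin/env python
        # mycd.py - print the first indexed directory matching a substring query
        # Shell wrapper idea:  mycd() { cd "$(python mycd.py "$1")"; }
        import os
        import sys

        ROOT = os.path.expanduser("~/dev")   # assumed root to index

        def build_index(root):
            # collect every directory under the root
            index = []
            for dirpath, dirnames, _ in os.walk(root):
                index.extend(os.path.join(dirpath, d) for d in dirnames)
            return index

        def lookup(query, index):
            # return the first directory whose basename contains the query
            for path in index:
                if query in os.path.basename(path):
                    return path
            return "."

        if __name__ == "__main__":
            query = sys.argv[1] if len(sys.argv) > 1 else ""
            print(lookup(query, build_index(ROOT)))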

    Read the article

  • Mod Rewrite - directing HTTP/HTTPS traffic to the appropriate virtual hosts

    - by kce
    I have an Apache2 web server (v. 2.2.16) running on Debian, hosting three virtual hosts. The first two hosts are HTTP only (server1 and server2). The last host is HTTPS only (server3). My virtual host configuration files can be found at pastebin. I would like to use mod_rewrite to get the following behavior: any request for http://server3 is redirected to https://server3; any request for either https://server1 or https://server2 is redirected to http://server1 or http://server2 as appropriate. Currently, requesting http://server3 gives you a 403 because indexing is disabled for that host, and a request for https://server1 or https://server2 resolves as https://server3 (as it's the only virtual host running SSL). This behavior is not desirable. So far I have added a rewrite rule to the central configuration file (myServerWideConfs.conf), unfortunately with no effect. I was under the impression that this rule (or something similar) should rewrite all https:// requests for server1 and server2 to the proper http:// request.

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^server3 [NC]
        RewriteRule (.*) http://%{HTTP_HOST}

    My question is two-fold: What mod_rewrite rules should I use to accomplish this? And where should they go? Debian's packaging of Apache has a pretty granular (i.e., fractured) configuration file layout; should my rewrite rules go in /etc/apache2/apache2.conf, /etc/apache2/conf.d/myServerWideConfs.conf, or the individual virtual host files? Is mod_rewrite the right tool to accomplish this, or am I missing something in my greater Apache configuration?
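
    For reference, a rough sketch of how the two redirects are often expressed, placed in the individual virtual host files rather than in a server-wide include. This is a sketch, not a drop-in answer: the hostnames follow the question, and the certificate directives and the rest of each vhost are omitted.

        # plain-HTTP vhost for server3: push everything to HTTPS
        <VirtualHost *:80>
            ServerName server3
            RewriteEngine On
            RewriteRule ^ https://server3%{REQUEST_URI} [R=301,L]
        </VirtualHost>

        # the single SSL vhost: requests arriving with a Host other than server3
        # get bounced back to their plain-HTTP site
        <VirtualHost *:443>
            ServerName server3
            SSLEngine on
            # existing SSL certificate directives stay as they are
            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^server3 [NC]
            RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>

    The second block lives in the SSL vhost because, as noted above, it is the only place https://server1 and https://server2 can land.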

    Read the article

  • Solr Autosuggest

    - by rahul
    Hi, I am using the Solr (1.4) AutoSuggest feature via the TermsComponent. Currently, if I type 'goo', Solr suggests words like 'google'. But I would like to receive suggestions like 'google, google alerts, ..', i.e. suggestions with both single and multiple terms. I'm not sure whether I need to use edge n-grams for that, e.g. indexing google like 'go', 'oo', 'og', ... . But I think I don't need this, since I don't want partial search. Please let me know if there is any way to do multiple-word suggestions. Thanks in advance.
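
    For what it's worth, a common way to get phrase suggestions out of the TermsComponent is to index shingles (word n-grams) rather than edge n-grams, so that entries like 'google alerts' exist as single terms in the index. A rough schema sketch; the field and type names are made up for illustration, and the source field is assumed to be the normal search text:

        <!-- schema.xml: a type that emits single words plus 2-3 word phrases -->
        <fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
          <analyzer>
            <tokenizer class="solr.WhitespaceTokenizerFactory"/>
            <filter class="solr.LowerCaseFilterFactory"/>
            <filter class="solr.ShingleFilterFactory" maxShingleSize="3" outputUnigrams="true"/>
          </analyzer>
        </fieldType>

        <!-- a dedicated suggest field fed from the regular text field -->
        <field name="suggest" type="text_suggest" indexed="true" stored="false"/>
        <copyField source="text" dest="suggest"/>

    With something like this, the existing TermsComponent query (e.g. terms.fl=suggest with terms.prefix=goo) can return both 'google' and 'google alerts'.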

    Read the article

  • Huge google impression drop after cleaning html

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that donates grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains: www.fundacioolgatorres dot org (Catalan), www.fundacionolgatorres dot org (Spanish) and www.olgatorresfoundation dot org (English). So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions on both dropped to almost nothing. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real change was cleaning up the old, ineffective HTML into a cleaner form (mostly moving away from redundant table construction to a table-less view). Here is a before and after snapshot of what the change looks like: Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page in Catalan) After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page in English) Does anybody have a clue what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, nor have page names or descriptions. Possible secondary question: if Google sees it as a major overhaul and decides to drop the pagerank sharply, does it come back to pre-change levels if the content "checks out", or will the page start over from scratch earning those pagerank points (which would mean that we would have to wait 6 months for the pages to recover to the level they had two weeks ago)? (duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience)

    Read the article

  • Implications and benefits of removing NT AUTHORITY\SYSTEM from sysadmin role?

    - by Cade Roux
    Disclaimer: I am not a DBA. I am a database developer. A DBA just sent a report to our data stewards and is planning to remove the NT AUTHORITY\SYSTEM account from the sysadmin role on a bunch of servers. (This probably violates some audit report they received.) I see an MSKB article that says not to do this. From what I can tell from reading a variety of disparate information on the web, a bunch of special services/operations (Volume Copy, Full Text Indexing, MOM, Windows Update) use this account even when the SQL Server and Agent services etc. are all running under dedicated accounts.

    Read the article

  • How to get search engines to properly index an ajax driven search page

    - by Redtopia
    I have an ajax-driven search page that will allow users to search through a large collection of records. Each search result points to index.php?id=xyz (where xyz is the id of the record). The initial view does not have any records listed, and there is no interface that allows you to browse through all records. You can only conduct a search. How do I build the page so that spiders can crawl each record? Or is there another way (outside of this specific search page) that will allow me to point spiders to a list of all records. FYI, the collection is rather large, so dumping links to every record in a single request is not a workable solution. Outputting the records must be done in multiple requests. Each record can be viewed via a single page (eg "record.php?id=xyz"). I would like all the records indexed without anything indexed from the sitemap that shows where the records exist, for example: <a href="/result.php?id=record1">Record 1</a> <a href="/result.php?id=record2">Record 2</a> <a href="/result.php?id=record3">Record 3</a> <a href="/seo.php?page=2">next</a> Assuming this is the correct approach, I have these questions: How would the search engines find the crawl page? Is it possible to prevent the search engines from indexing the words "Record 1", etc. and "next"? Can I output only the links? Or maybe something like:  
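
    On the "output only the links" idea: an XML sitemap (or several, given the size of the collection) lists bare URLs with no anchor text, so nothing like "Record 1" or "next" appears as crawlable page content. A minimal sketch, with placeholder URLs following the record.php pattern above:

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url><loc>http://example.com/record.php?id=record1</loc></url>
          <url><loc>http://example.com/record.php?id=record2</loc></url>
          <url><loc>http://example.com/record.php?id=record3</loc></url>
        </urlset>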

    Read the article

  • Need to sanity-check my .htaccess, especially Limit GET POST line for Google repellent

    - by jose
    I need a sanity check on this .htaccess I inherited with a 5-month-plus-old WordPress site. What's the symptom? Google and Bing crawl, but don't index any of the pages. Let me be clear: I'm not mad about "not ranking high." I think something is (accidentally) rejecting search engine indexing. I am not an expert on .htaccess, but one part especially looked funny: the Limit GET POST section. Is it not weird to have both Allow and Deny all, with no parameters? Also, I've ruled out robots.txt, but if I were you I'd want to see it, so here it is:

        User-agent: *
        Crawl-delay: 30

    And here's the more suspect .htaccess:

        # temp redirect wordpress content feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymousblog [R=302,NC,L]
        </IfModule>

        # temp redirect wordpress comment feeds to feedburner
        <IfModule mod_rewrite.c>
        RewriteEngine on
        RewriteCond %{HTTP_USER_AGENT} !FeedBurner [NC]
        RewriteCond %{HTTP_USER_AGENT} !FeedValidator [NC]
        RewriteRule ^comments/feed/?([_0-9a-z-]+)?/?$ http://feeds.feedburner.com/anonymous_comments [R=302,NC,L]
        </IfModule>

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

        IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

        <Limit GET POST>
        order deny,allow
        deny from all
        allow from all
        </Limit>

        <Limit PUT DELETE>
        order deny,allow
        deny from all
        </Limit>

        php_value memory_limit 32M

    Adding the header by request:

        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <meta name="robots" content="noindex,nofollow" />
        <meta name="description" content="buncha junk i've deleted." />
        <meta name="keywords" content="keywords i've deleted" />
        <meta name="viewport" content="width=device-width" />

    Read the article

  • SQL SERVER – 2014 Announced and SQL Server 2014 Datasheet

    - by Pinal Dave
    Earlier this week Microsoft announced SQL Server 2014. The trial of SQL Server 2014 is expected later this year. Here are a few of the improvements I was looking forward to that are in this release. Please note this is not an exhaustive list of the features of SQL Server 2014; it is a quick note on the features I was looking forward to that will now be available. AlwaysOn now supports 8 secondaries instead of 4. Online indexing at the partition level – this is a good thing, as index rebuilding can now be done at the partition level. Statistics at the partition level – this will be a huge improvement in performance. In-Memory OLTP works by providing in-application memory storage for the most often used tables in SQL Server. (Read More Here) Columnstore indexes can be updated – I just can't wait for this feature (Columnstore Index). Resource Governor can control IO along with CPU and memory. Increase performance by extending the SQL Server in-memory buffer pool to SSDs. Backup to Azure Storage. You can additionally download the SQL Server 2014 Datasheet from here. Which is your favorite new feature of SQL Server 2014? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
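
    As a quick illustration of the partition-level items mentioned above, here is a sketch of what they look like in T-SQL on SQL Server 2014; the table, index, statistics and partition names are hypothetical:

        -- Rebuild only partition 3 of an index, keeping it online (new in 2014)
        ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
        REBUILD PARTITION = 3
        WITH (ONLINE = ON);

        -- Incremental (per-partition) statistics
        CREATE STATISTICS ST_Orders_OrderDate ON dbo.Orders (OrderDate)
        WITH INCREMENTAL = ON;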

    Read the article

  • Outlook Search Folder - Unread Mail in shared mailbox

    - by Garrett
    I have a user who is trying to configure the Unread Mail search folder for a shared mailbox in Outlook 2007. I believe last time we accomplished this by doing an advanced find, and saving the search. However, on this computer I can't search more than one folder of the shared mailbox at a time. Everything I have read online says this isn't possible, but we have one user who has it set up and working perfectly. There's no additional software or indexing, not even Windows Desktop Search 4.0 updates installed.

    Read the article

  • 256 SSD / 2TB Internal drive, i7, 16GB RAM.. explorer.exe still slow

    - by web_dvlp_sd
    OK, so I just got done tweaking my brand new PC, and explorer.exe still manages to be sluggish, not as snappy as I'd like or expected at all. Specs are a 4th-gen i7 processor, 16GB RAM, Windows 7 Home Premium 64-bit on a 256GB SSD, and a secondary internal 2TB drive. I assumed the system would be lightning fast. I've done a bunch of research and switched a lot of different settings, including enabling/disabling indexing, turning the pagefile on/off or placing it on a separate drive, and turning folder optimization options on/off; the changes have made minimal or no difference. Any further advice, or anything I might be missing, besides a new/repair install? The PC is literally 3 days old; I don't feel like I need a new install at all.

    Read the article

  • How to prevent Mac OS X creating .DS_Store files on non Mac (HFS) Volumes?

    - by sudo petruza
    Is there a way to prevent Mac OS X from creating .DS_Store and other hidden meta-files on foreign volumes like NTFS and FAT? I share an NTFS partition, with data like Thunderbird and Firefox profiles and Apache's DocumentRoot, between Mac OS X and Windows, which is very handy. I don't mind if Mac OS X is not capable of indexing or otherwise doing the neat things those metafiles are for. Note: it's not shared over a network; both operating systems and the shared partition coexist on the same disk, on the same machine.

    Read the article

  • asp.net mvc vs angular.js model binding

    - by aw04
    So I've noticed a trend lately of .net web developers using angular.js on the client side of applications, and I've become more curious as I play around with angular and compare it to how I would do things in asp.net mvc. I'll give a quick example of what really got me thinking. I recently came across a situation at work (I work in a .net environment) where I needed to create a table bound to a collection of objects, with the ability to add and remove rows/items from the collection. I had an add button that created a new object and appended a row to the end of the table, and a remove button in each row to remove a particular object/row. Using asp.net mvc, I first found myself making an ajax call to the server for each operation, updating the server-side model, and refreshing part of the page to show the result in the table. This worked, but I didn't really like the idea of calling the server to update the model each time, so I tried to come up with a solution to do this on the client side. It turned out to be quite a task, as I had to generate the html on add, with validation and all, and the correct indexing for the model binding to work. It got worse on remove, as I ended up with a crazy string-replace function to recreate the indexes on each item to satisfy the binding requirements (if an item other than the last is removed, the indexes are no longer correct). Now out of curiosity, I tried to recreate this at home in angular (which I had no experience with) and it took me all of about 10 minutes, with simple functions to add and remove items from the client-side model. This is just one example, but it seems to me that I'm able to achieve the same results with far fewer calls to the server in angular because of the fact that it binds to a client-side model. So my question is: is this a distinct advantage of using a javascript mvc framework, or am I somehow under-utilizing the power of asp.net mvc? And am I right in thinking that these operations should be done on the client and have no business requiring calls to the server?
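
    For context, the reason the Angular version takes minutes rather than hours is that ng-repeat binds the table rows straight to an array on the scope, so add and remove are plain array operations. A minimal sketch; the module, controller and property names are made up, and the angular.js script include is assumed:

        <div ng-app="app" ng-controller="ItemsCtrl">
          <table>
            <tr ng-repeat="item in items">
              <td><input ng-model="item.name" /></td>
              <td><button ng-click="remove($index)">Remove</button></td>
            </tr>
          </table>
          <button ng-click="add()">Add</button>
        </div>

        <script>
          angular.module('app', []).controller('ItemsCtrl', function ($scope) {
            $scope.items = [];                                           // client-side model
            $scope.add = function () { $scope.items.push({ name: '' }); };
            $scope.remove = function (i) { $scope.items.splice(i, 1); };
          });
        </script>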

    Read the article

  • Sitemap structure for network of subdomains

    - by HaCos
    I am working on a project that's a network of subdomains across 2 domains (domain.gr & domain.com.cy), similar to Hubpages [each user gets a profile under a different subdomain, and there is a personal blog for that user as well], and I think there is something wrong with our sitemap submission. Sometimes it takes weeks for new profiles to get indexed. We use one Webmaster Tools account to manage the whole network, and we don't want to create different accounts for each subdomain since there are already more than 1000. According to this post http://goo.gl/KjMCjN, I ended up with a structure of 5 sitemaps: the 1st sitemap is for indexing the others; the 2nd sitemap is for all user profiles under domain.gr; the 3rd sitemap is for all user profiles under domain.com.cy; the 4th sitemap is for all posts under *.domain.gr - a news sitemap http://goo.gl/8A8S9f; the 5th sitemap is for all posts under *.domain.com.cy - a news sitemap again. Now my questions: Should we create news sitemaps, or just list all posts in the 2nd & 3rd sitemaps? Does link ordering matter? E.g. should the most recently created user be first in the sitemap, or does it make no difference as long as the lastmod date is correct? Can anyone guess how Hubpages submit their sitemaps in Webmaster Tools, so maybe we could follow their approach? Any alternative or better way to index this kind of schema? PS: Our network is multi-language - Greek & English are available. We use hreflang tags in the head of each page to separate the country target of each version.
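
    For reference, the "1st sitemap for indexing the others" in the structure above corresponds to a sitemap index file in the sitemaps.org protocol. A rough sketch with placeholder file names; whether a single index may reference sitemaps hosted on both domains depends on the cross-submission/verification rules, so that part is worth checking:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap><loc>http://domain.gr/sitemap-profiles-gr.xml</loc></sitemap>
          <sitemap><loc>http://domain.com.cy/sitemap-profiles-cy.xml</loc></sitemap>
          <sitemap><loc>http://domain.gr/sitemap-posts-gr.xml</loc></sitemap>
          <sitemap><loc>http://domain.com.cy/sitemap-posts-cy.xml</loc></sitemap>
        </sitemapindex>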

    Read the article

  • updatedb & locate command problem - Files from external hard drive are no longer indexed after rebooting

    - by user784637
    Files from my external hard drive are no longer indexed after rebooting. I have to remount and then run # updatedb after each reboot. The problem is updatedb takes a few minutes for my external hard drives. Is there any way I can retain indexing for my externals after I reboot, so that the locate command can search through them? EDIT: Per request, here are my specs:

        $ cat /etc/updatedb.conf
        PRUNE_BIND_MOUNTS="yes"
        # PRUNENAMES=".git .bzr .hg .svn"
        PRUNEPATHS="/tmp /var/spool /media"
        PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf fuse.glusterfs fuse.sshfs ecryptfs fusesmb devtmpfs"

        # mount
        /dev/sda5 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/me/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=me)
        /dev/sdb1 on /media/me type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
        /dev/sdd1 on /media/Little Boy type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
        /dev/sde1 on /media/Fat Man type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)

        # on_ac_power; echo $?
        255

    Read the article

  • OSX Time Machine: deletion of backup folders

    - by jml
    I saw this question and was hoping that someone could expand upon the chosen answer (which I understood): Can you mv Time Machine backup files as sudo from the trash to their original locations? I have tried doing this as root to no avail (operation not permitted). If not, can you successfully rm them from the trash via the terminal, faster than the endless 'preparing to empty the trash' dialog suggests? If you get the files back out of the trash, can you tell whether they are intact via Disk Utility (and how)? Can you force indexing on a Time Machine drive in the same way that you would a normal drive, to rebuild the TM index? I realize that a single answer could clarify all of the above, but I wanted to include details to be clear on what I am asking. Thanks for any help.

    Read the article

  • Windows Server 2008 R2 and OSX 10.5/.6/.7, Search, Spotlight

    - by Keith Loughnane
    I'm handling a migration from an old Mac server to a Windows Server 2008 R2 machine with a 12TB (10 usable) RAID5 array. It's shared over SMB, and the OSX 10.5/.6 users can now search, but it only works sometimes and takes up to 10 minutes. The OSX 10.7 machine seems to be fine. I've looked in the root of the shared drive for a .Spotlight-V100 file (ls -a) but it doesn't seem to be there. mdutil says indexing is on for that volume, and I have cleared the index using mdutil -E /Volumes/MeSharedVolume numerous times. Any ideas?

    Read the article

  • Strange mesh import problem with Assimp and OpenGL

    - by Morgan
    Using the assimp library for importing 3D data into an OpenGL application, I get some strange problems regarding the indexing of the vertices. If I use the following code for importing vertex indices:

        for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
            const struct aiFace * face = &mesh->mFaces[t];
            if (face->mNumIndices == 3) {
                indices->push_back(face->mIndices[0]);
                indices->push_back(face->mIndices[1]);
                indices->push_back(face->mIndices[2]);
            }
        }

    I get the following result: Instead, if I use the following code:

        for (int k = 0; k < 2; k++) {
            for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
                const struct aiFace * face = &mesh->mFaces[t];
                if (face->mNumIndices == 3) {
                    indices->push_back(face->mIndices[0]);
                    indices->push_back(face->mIndices[1]);
                    indices->push_back(face->mIndices[2]);
                }
            }
        }

    I get the correct result. Hence adding the indices twice renders the correct result? The OpenGL buffer is populated like so:

        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices->size() * sizeof(unsigned int), indices->data(), GL_STATIC_DRAW);

    And rendered as follows:

        glDrawElements(GL_TRIANGLES, vertexCount*3, GL_UNSIGNED_INT, indices->data());
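
    Not necessarily the cause of the artefact above, but worth noting: when an index buffer is bound to GL_ELEMENT_ARRAY_BUFFER, the last argument of glDrawElements is interpreted as a byte offset into that buffer rather than a client-side pointer, and the count is normally the number of indices actually uploaded. A sketch of the usual pattern; the buffer id is illustrative:

        // Upload the index data once into a bound element array buffer
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferId);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER,
                     indices->size() * sizeof(unsigned int),
                     indices->data(), GL_STATIC_DRAW);

        // Draw from the bound buffer: count = number of indices,
        // last argument = byte offset into the buffer, not indices->data()
        glDrawElements(GL_TRIANGLES,
                       static_cast<GLsizei>(indices->size()),
                       GL_UNSIGNED_INT,
                       reinterpret_cast<const void*>(0));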

    Read the article

  • Logstash shipper & server on the samebox

    - by keftes
    I'm trying to set up a central logstash configuration. However, I would like to send my logs through syslog-ng and not third-party shippers. This means that my logstash server accepts, via syslog-ng, all the logs from the agents. I then need to install a logstash process that will be reading from /var/log/syslog-clients/* and grabbing all the log files that are sent to the central log server. These logs will then be sent to redis on the same VM. In theory I also need to configure a second logstash process that will read from redis, start indexing the logs and send them to elasticsearch. My question: Do I have to use two different logstash processes (shipper & server) even if I am on the same box (I want one log server instance)? Is there any way to have just one logstash configuration and have the process read from syslog-ng, write to redis, and also read from redis and output to elasticsearch? Diagram of my setup: [client]-------syslog-ng--- [log server] ---syslog-ng <----logstash-shipper --- redis <----logstash-server ---- elastic-search <--- kibana
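
    On the single-configuration question: one logstash config file can hold several inputs and outputs, with events routed by tags, so a single process can read the syslog-ng files, push to redis, read back from redis and index into elasticsearch. A rough sketch assuming a logstash version with conditionals (1.2-era syntax; hostnames, paths and the redis key are placeholders, and events coming back off the queue are tagged so they are not queued again):

        input {
          file  { path => "/var/log/syslog-clients/*" type => "syslog" }
          redis { host => "127.0.0.1" data_type => "list" key => "logstash" tags => ["queued"] }
        }

        output {
          if "queued" in [tags] {
            # events pulled back off the redis queue: index them
            elasticsearch { host => "127.0.0.1" }
          } else {
            # fresh events from the syslog-ng files: queue them
            redis { host => "127.0.0.1" data_type => "list" key => "logstash" }
          }
        }

    Whether one process is wise for the expected volume is a separate question; the shipper/indexer split exists mainly so the two halves can be scaled and restarted independently.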

    Read the article

  • ArchBeat Link-o-Rama Top 20 for June 10-16, 2012

    - by Bob Rhubart
    The top 20 most popular items shared via my social networks for the week of June 10-16, 2012. DevOps: Evolving to Handle Disruption | JP Morgenthal The Healthy Tension That Mobility Creates | Hernan Capdevila If you aren't among those finding bugs you might be among those complaining about them later | Markus Eisele ODTUG Kscope12 - June 24-28 - San Antonio, TX It's Alive! - The Oracle OpenWorld Content Catalog URGENT BULLETIN: Disable JRE Auto-Update for All E-Business Suite End-Users Aetna Dumps Its Siloed Enterprise Architecture for SOA | Stephanie Overby Condos and Clouds: Thinking about Cloud Computng by Looking at Condominiums | Pat Helland 5 minutes or less: Indexing Attributes in OID | Andre Correa Whole Lotta Virtualization Goin' On | Rick Ramsey The Road to a Cloud-Enabled, Infinitely Elastic Application Infrastructure | Andy Butler, Massimo Pezzini Migrating C/C++ embedded SQL code | Tom Laszewski Catching Up to Mobile Computing | Bob Rhubart Duke's Choice Award Nominations Close Friday! | Tori Wieldt Eclipse DemoCamp - June 2012 - Redwood Shores, CA BI Architecture Master Class for Partners - Oracle Architecture Unplugged ADF Tutorial Chapter 1: Introduction | Yannick Ongena OPN: Fusion Middleware Summer Camps in July in Lisbon and Munich Networking in VirtualBox | The Fat Bloke 2012 Oracle Fusion Middleware Innovation Awards - Win a FREE Pass to Oracle OpenWorld 2012 in San Francisco Thought for the Day "If the mind really is the finest computer, then there are a lot of people out there who need to be rebooted." — Tim Bryce Source: SoftwareQuotes.com

    Read the article

  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 to unknown, and an integer value Y ranging from 1..9. The data need to be presented in the order 'X then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as the index (sort order). If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y? [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:

        ID  Baseline  ParentID
        1   0         0    (task)
        5   2         1    (baseline)
        8   1         1    (baseline)
        9   0         0    (task)
        12  0         0    (task)
        16  1         12   (baseline)

    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies the tasks. IDs are unlimited (the X variable). The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
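
    One arithmetic observation that may help frame this: since Y is confined to a single decimal digit (0 for tasks, 1-9 for baselines), the integer X*10 + Y already sorts by X then Y for any X, bounded only by the width of the integer type (it is just the X + Y/10 idea above scaled by 10). A worked example with made-up pairs:

        X = 1,  Y = 0   ->   1*10 + 0 =  10
        X = 1,  Y = 2   ->   1*10 + 2 =  12
        X = 9,  Y = 0   ->   9*10 + 0 =  90
        X = 12, Y = 1   ->  12*10 + 1 = 121

    These values sort in the required 'X then Y' order; whether that fits the Gantt component's expectations is, of course, a separate question.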

    Read the article

  • Google Mini Search Does Not Return Results

    - by James Lawruk
    I pointed the Google Mini to a small intranet site (8 simple HTML pages). It crawls all 8 pages just fine, but a simple search for a word clearly contained within the body of the pages returns 0 results. If I enter a search like this: "site:mysite.net", it returns all 8 pages. If I enter a search with a word contained in a URL, it will return that URL. In the search results, only the URL is returned in the list items; there is no page title or blurb text as you normally see in Google search results. It's as if it is only indexing the URL and skipping the page title, body, etc. I have an old software version, 3.4.14, but I wanted to try to get this thing working without an upgrade. It worked before for another domain, so why would it not work now? Any ideas?

    Read the article

  • Canonicals with differing content

    - by Jimbo Jonny
    Interesting conundrum here with canonicals. Let's say I have a site with a "verified" system where other websites can become so-and-so "verified". The URL they send people to to confirm verification is something like "blah.com/verify/company1" or "blah.com/verify/company2". But logically "blah.com/verify" itself is not verifying anyone in particular, so it redirects to the signup form to get verified, at "blah.com/verify/register". As far as the actual registered companies go, I figure it doesn't make sense to index every individual URL when the only tiny difference is which company name it's saying yay or nay to being verified, so canonicals could come in handy on those pages to condense the indexing. Yet making "blah.com/verify" the canonical "hub" doesn't work well, because it's a signup form, not a verification page, so it technically has quite different content from the various verification pages themselves. But at the same time it's a bit unfair to choose one company to point all the canonical benefits to and use that as the "hub", yet a bit wasteful to have Google index every individual verification page and spread out all that link juice. Basically, I'm just looking for advice: what's best for this from a search engine standpoint?
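
    For reference, the mechanism under discussion is declared per page with a link element in the head; a sketch using the URL pattern from the question, with the target left as whichever page is eventually chosen as the hub:

        <!-- in the <head> of each variant, e.g. blah.com/verify/company1 -->
        <link rel="canonical" href="http://blah.com/verify" />
        <!-- the href is whichever URL ends up as the canonical hub -->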

    Read the article
