Search Results

Search found 25625 results on 1025 pages for 'google documents'.


  • NHibernate - Getting an exception when running a simple join query

    - by Muhammad Akhtar
    Hi, I am getting an exception when I run a SQL query with an inner join. Here is what I am doing (very simple):

        ISession session = NHibernateHelper.GetCurrentSession();
        string query = string.Format("select Documents.TypeId from Documents inner join DocumentTrackingItems on Documents.Id = DocumentTrackingItems.DocumentId WHERE DocumentTrackingItems.ItemStepId = {0} order by Documents.TypeId asc", 13);
        System.Collections.ArrayList document = (System.Collections.ArrayList)session.CreateSQLQuery(query, "document", typeof(Document)).List();

    I am getting this exception:

        Exception Details: System.IndexOutOfRangeException: Id

    What's wrong in my query? Thanks.
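    A likely cause, sketched below as an illustration rather than the poster's code: CreateSQLQuery with a return alias tries to hydrate a full Document entity, yet the query only selects TypeId, so NHibernate cannot find the Id column it expects. A minimal sketch of one way around this, assuming the standard ISQLQuery API and treating TypeId as an integer scalar:

        using System.Collections;
        using NHibernate;

        ISession session = NHibernateHelper.GetCurrentSession();
        // Hydrate only the TypeId column as a scalar instead of mapping rows
        // to a Document entity, and bind the id as a parameter.
        IList typeIds = session
            .CreateSQLQuery(
                "select d.TypeId from Documents d " +
                "inner join DocumentTrackingItems t on d.Id = t.DocumentId " +
                "where t.ItemStepId = :stepId " +
                "order by d.TypeId asc")
            .AddScalar("TypeId", NHibernateUtil.Int32)
            .SetInt32("stepId", 13)
            .List();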

    Read the article

  • How to remove control chars from a UTF-8 string

    - by Mimefilt
    Hi there, I have a VB.NET program that handles the content of documents. The program handles high volumes of documents as a batch (2 million documents; 1 TB total volume). Some of these documents may contain control chars, or chars like F0E8 (http://www.fileformat.info/info/unicode/char/f0e8/browsertest.htm). Is there an easy and, especially, fast way to remove those chars (except space, newline, tab, ...)? If the answer is regex: has anyone a complete regex for me? Thanks!
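    A minimal sketch of one regex-based approach in C#, assuming that stripping the Unicode "control", "format" and "private use" categories (U+F0E8 is a private-use character) matches the intent; the set of whitespace to keep is an assumption to adjust:

        using System.Text.RegularExpressions;

        // Matches control (Cc), format (Cf) and private-use (Co) characters,
        // using .NET character-class subtraction to keep tab, CR and LF.
        static readonly Regex Unwanted =
            new Regex(@"[\p{Cc}\p{Cf}\p{Co}-[\t\r\n]]", RegexOptions.Compiled);

        static string Clean(string text)
        {
            return Unwanted.Replace(text, string.Empty);
        }

    RegexOptions.Compiled trades a one-time compilation cost for faster matching, which should pay off across two million documents.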

    Read the article

  • Can anyone reproduce this bug in VS2010 RC???

    - by Raj Aththanayake
    VS2010 RC - the opened documents do not close (the first time VS is loaded) when clicking on "Close All Documents". More info: https://connect.microsoft.com/VisualStudio/feedback/details/541954/vs2010-rc-the-open-documents-do-not-close-when-click-on-close-all-documents Can anyone reproduce this bug in VS2010 RC? You need to be running Windows XP.

    Read the article

  • Create an HTML anchor to a file on the C drive

    - by GigaPr
    Hi, could you tell me how to create a link to a file on the C drive (local machine), or a link to download a file from the hard drive? This doesn't seem to work:

        <a href="C:/Documents and Settings/Giga/My Documents/NetBeansProjects/JavaRssFeed/RssFeed/build/web/WEB-INF/Xml/Gaetano Feed.xml" class="font18">C:/Documents and Settings/Giga/My Documents/NetBeansProjects/JavaRssFeed/RssFeed/build/web/WEB-INF/Xml/Gaetano Feed.xml</a>

    Thanks
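    A sketch of the usual fix, stated as an assumption about the poster's setup: local files need the file:// URI scheme with a percent-encoded path, and note that most browsers refuse to follow file:// links from a page served over HTTP for security reasons, so this only works for pages opened locally:

        <a href="file:///C:/Documents%20and%20Settings/Giga/My%20Documents/NetBeansProjects/JavaRssFeed/RssFeed/build/web/WEB-INF/Xml/Gaetano%20Feed.xml"
           class="font18">Gaetano Feed.xml</a>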

    Read the article

  • SMB2 traffic crashes network?

    - by Phil Cross
    We've been having significant network slowdown issues over the past few weeks, primarily on a Friday morning. We run Windows 7 client machines, with Windows Server 2008 R2 servers. What generally happens is the network starts to slow down massively at 08:55 and resumes normal speeds at around 09:20. This affects everything on the network, from logging on and resetting passwords to opening programs and files. On my client machine, physical memory usage remains at around 40% (normal) and CPU usage hovers around 0-10% idle. On the servers, memory usage spikes massively and remains quite intense during the times mentioned above.

    I have taken several Wireshark captures, both during the slowdown and when the network operates fine. One of the main things I noticed is the increase in SMB2 entries in the Wireshark log during the slowdown:

        Record  Time         Source       Destination  Protocol  Length  Info
        382     3.976460000  10.47.35.11  10.47.32.3   SMB2      362     Create Request File: pcross\My Documents
        413     4.525047000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents
        441     5.235927000  10.47.32.3   10.47.35.11  SMB2      298     Create Response File: pcross\My Documents\Downloads
        442     5.236199000  10.47.35.11  10.47.32.3   SMB2      260     Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *;Find Request File: pcross\My Documents\Downloads SMB2_FIND_ID_BOTH_DIRECTORY_INFO Pattern: *
        573     6.327634000  10.47.35.11  10.47.32.3   SMB2      146     Close Request File: pcross\My Documents\Downloads
        703     7.664186000  10.47.35.11  10.47.32.3   SMB2      394     Create Request File: pcross\My Documents\Downloads\WestlandsProspectus\P24 __ P21.pdf

    These are some of the SMB2 records from a list of a couple of hundred which originate from my computer with a destination of the file server. One interesting thing to note is that the last entry in the examples above is for a PDF file. That file was not open anywhere on my computer, or on anyone else's. No folders containing the files were open either. When I took another capture while the network was running fine, there were hardly any SMB2 entries, and the ones that were displayed were mainly from Wireshark.

    We currently have around 800 computers, 90 Macs and 200 laptops and netbooks. Our concern is: if this traffic is happening on my computer, is it happening on other computers, and if so, would those computers be adding to the slow network issues? Again, this only happens during certain times. We're pretty sure it's not our antivirus. Is there any way to narrow down what is initiating this SMB traffic during those particular times? Any extra advice, or links to resources, would be appreciated.

    Read the article

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following the instructions in a few posts, I was able to set up mailx using a .mailrc file. There was first the error of nss-config-dir; I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command:

        ~]$ certutil -L -d certs

        Certificate Nickname                                   Trust Attributes
                                                               SSL,S/MIME,JAR/XPI
        GeoTrust SSL CA                                        ,,
        VeriSign Class 3 Secure Server CA - G3                 ,,
        Microsoft Internet Authority                           ,,
        VeriSign Class 3 Extended Validation SSL CA            ,,
        Akamai Subordinate CA 3                                ,,
        MSIT Machine Auth CA 2                                 ,,
        Google Internet Authority                              ,,

    Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and many Google searches, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file:

        ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc

    Now re-import that file, and mark it as trusted for SSL certificates, like so:

        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc

    After this, listing shows it trusted:

        ~]$ certutil -L -d certs

        Certificate Nickname                                   Trust Attributes
                                                               SSL,S/MIME,JAR/XPI
        ...
        Google Internet Authority                              C,,

    And mailx sends out without a hitch:

        ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected]
        ho ho ho
        EOT
        ~]$

    I hope this is helpful to someone looking to be done with the error. Also, I am curious about some things. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this?

        ~]$ certutil -A -t "C,," \
            -n 'gmail.com' \
            -d certs \
            -i 'http://google.com/cert/this...'

    Read the article

  • When I ping Internet addresses like Yahoo or Google, I get 2 reply packets and 2 lost packets.

    - by navi
    I have an Airtel broadband and a Tata broadband connection. I have around 50 PCs connecting through the Airtel broadband connection. Both are DSL connections, with the phone line going into a DSL modem and an Ethernet cable going from the DSL modem directly into a switch. Currently, only the Airtel connection is connected, with a static IP on my private LAN, using the Airtel ISP DNS servers as the DNS IP address; the default gateway is 192.168.1.1 (the IP address of the DSL modem). All PCs are connected in a workgroup. When the connection is in full use, my users complain that certain web pages do not open. When I ping Internet addresses like Yahoo or Google, I get 2 reply packets and 2 lost packets. I suspect that a single broadband connection is not able to sustain 50 simultaneous downloads/browsing sessions. Is there any device that can connect to both DSL lines and combine them into one connection, giving me high-speed simultaneous browsing? Help needed urgently. Thank you to all those who reply.

    Read the article

  • How to delete a file that contains a backslash in the name under Windows 7?

    - by espinchi
    I want to delete a file named workspaces\google-gson-1.7.1-release.zip. Yep, it contains a backslash in the name. Here it is:

        G:\>dir Z_DRIVE
         Volume in drive G is samsung
         Volume Serial Number is 48B9-7E1D

         Directory of G:\Z_DRIVE

        04/06/2012  08:09 PM    <DIR>          .
        04/06/2012  08:09 PM    <DIR>          ..
        05/01/2011  02:21 PM           528,016 workspaces\google-gson-1.7.1-release.zip
                       1 File(s)        528,016 bytes
                       2 Dir(s)  88,400,478,208 bytes free

    The first attempt is to just delete it from Windows Explorer, but it says it can't find the file. Then, I tried from the command line:

        G:\>del Z_DRIVE\workspaces\google-gson-1.7.1-release.zip
        The system cannot find the file specified.

    And, after researching a bit in the internets, I also tried the following, with no luck:

        G:\>del \\?\G:\Z_DRIVE\workspaces\google-gson-1.7.1-release.zip
        The system cannot find the file specified.

    Other than booting from some Linux CD, is there a way to get rid of this file? Update: also tried the following combinations, but the error is the same:

        G:\>del "\\?\G:\Z_DRIVE\workspaces\google-gson-1.7.1-release.zip"
        G:\Z_DRIVE>del workspaces\google-gson-1.7.1-release.zip
        G:\Z_DRIVE>del "workspaces\google-gson-1.7.1-release.zip"
        G:\Z_DRIVE>del workspaces*google-gson-1.7.1-release.zip

    Read the article

  • GWTAI applet integration in GWT problem

    - by andy
    Hi everybody. I'm working with gwtai to integrate a Java applet into my GWT project. Basic communication from my main application to the applet (such as invoking simple methods that return int or boolean values) works. But the main reason I need to integrate this applet is that I need it to connect to another server, receive an answer and pass it to my GWT application. So there's one basic method in the applet:

        public String SendAndReceive(String host, int sendPort, int receivePort, String query)

    that connects to the server, receives an answer and returns this answer as a String. When I now try to invoke this method like this:

        applet.SendAndReceive("0.0.0.0", 9099, 2000, "show streams;");

    I constantly run into the following error (full error message at the end):

        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject! [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)]

    I couldn't find a solution (gwtai is a quite uncommon topic). What I found out (and what the exception lets one assume) is that there's a security problem - maybe because I'm connecting to another server. I also read something about the browser's Same Origin Policy, which would point in the same direction. Up to now I have never worked with Java applets, so if someone has a solution or a hint I would be very thankful. If more code is helpful I can provide it. Thanks, Andy

    The full error message:

        21:03:49.864 [ERROR] [follovizergwt] Unable to load module entry point class follovizer.gwt.client.FolloVizerGWT (see associated exception for details)
        com.google.gwt.core.client.JavaScriptException: (String): Error calling method on NPObject! [plugin exception: java.security.AccessControlException: access denied (java.util.PropertyPermission * read,write)].
            at com.google.gwt.dev.shell.BrowserChannelServer.invokeJavascript(BrowserChannelServer.java:195)
            at com.google.gwt.dev.shell.ModuleSpaceOOPHM.doInvoke(ModuleSpaceOOPHM.java:120)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNative(ModuleSpace.java:507)
            at com.google.gwt.dev.shell.ModuleSpace.invokeNativeObject(ModuleSpace.java:264)
            at com.google.gwt.dev.shell.JavaScriptHost.invokeNativeObject(JavaScriptHost.java:91)
            at follovizer.gwt.client.AnduINAppletImpl.SendAndReceive(AnduINAppletImpl.java)
            at follovizer.gwt.client.FolloVizerGWT.createLayout(FolloVizerGWT.java:92)
            at follovizer.gwt.client.FolloVizerGWT.onModuleLoad(FolloVizerGWT.java:40)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
            at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
            at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
            at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
            at java.lang.Thread.run(Thread.java:619)

    Read the article

  • CUDA not working in 64-bit Windows 7

    - by Programmer
    I have CUDA Toolkit 4.0 installed on 64-bit Windows 7. I try building my CUDA code:

        #include <iostream>
        #include "cuda_runtime.h"
        #include "cuda.h"

        __global__ void kernel() {
        }

        int main() {
            kernel<<<1,1>>>();
            int c = 0;
            cudaGetDeviceCount(&c);
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);
            std::cout << "the name is" << prop.name;
            std::cout << "Hello World!" << c << std::endl;
            system("pause");
            return 0;
        }

    but the operation fails. Below is the build log:

        Build Log
        Rebuild started: Project: god, Configuration: Debug|Win32

        Command Lines
        Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat" with contents
        [
        @echo off
        echo "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
        "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\sample.cu"
        if errorlevel 1 goto VCReportError
        goto VCEnd
        :VCReportError
        echo Project : error PRJ0019: A tool returned an error code from "Compiling with CUDA Build Rule..."
        exit 1
        :VCEnd
        ]
        Creating command line """c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BAT0000482007500.bat"""
        Creating temporary file "c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" with contents
        [
        /OUT:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe" /LIBPATH:"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\lib\x64" /MANIFEST /MANIFESTFILE:"Debug\god.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.pdb" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 cudart.lib cuda.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib ".\Debug\sample.cu.obj"
        ]
        Creating command line "link.exe @"c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\RSP0000492007500.rsp" /NOLOGO /ERRORREPORT:PROMPT"

        Output Window
        Compiling with CUDA Build Rule...
        "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\bin\nvcc.exe" -gencode=arch=compute_10,code=\"sm_10,compute_10\" -gencode=arch=compute_20,code=\"sm_20,compute_20\" --machine 32 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\bin" -Xcompiler "/EHsc /W3 /nologo /O2 /Zi /MT " -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v4.0\include" -maxrregcount=0 --compile -o "Debug/sample.cu.obj" sample.cu
        sample.cu
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaLaunch@4 referenced in function "enum cudaError __cdecl cudaLaunch(char *)" (??$cudaLaunch@D@@YA?AW4cudaError@@PAD@Z)
        sample.cu.obj : error LNK2019: unresolved external symbol ___cudaRegisterFunction@40 referenced in function "void __cdecl _sti_cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaRegisterFatBinary@4 referenced in function "void __cdecl _sti_cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1(void)" (?sti__cudaRegisterAll_52_tmpxft_00001c68_00000000_8_sample_compute_10_cpp1_ii_b81a68a1@@YAXXZ)
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceProperties@8 referenced in function _main
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaGetDeviceCount@4 referenced in function _main
        sample.cu.obj : error LNK2019: unresolved external symbol _cudaConfigureCall@32 referenced in function _main
        C:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\Debug\god.exe : fatal error LNK1120: 7 unresolved externals

        Results
        Build log was saved at "file://c:\Users\t-sudhk\Documents\Visual Studio 2008\Projects\god\god\Debug\BuildLog.htm"
        god - 8 error(s), 0 warning(s)

    I will be highly obliged if someone could help me. Thanks

    Read the article

  • How do I search for a string with quotes?

    - by every_answer_gets_a_point
    I am searching for the string

        <!--m--><li class="g w0"><h3 class=r><a href="

    within the HTML source of this link: http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=Santarus+Inc

    This is how I am searching for it:

        string html_string = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=" + biocompany;
        html = new WebClient().DownloadString(html_string);
        d = html.IndexOf(@"<!--m--><li class=""g w0""><h3 class=r><a href=""", 1);

    For some reason it finds an occurrence at position 45 (in other words, d=45), but this is incorrect. Here are the first couple hundred characters of the HTML string:

        <!doctype html><head><title>Santarus Inc&#8206; - Google Search</title><script>window.google={kEI:\"b6jES5nPD4rysQOokrGDDQ\",kEXPI:\"23729,24229,24249,24260,24414,24457\",kCSI:{e:\"23729,24229,24249,24260,24414,24457\",ei:\"b6jES5nPD4rysQOokrGDDQ\",expi:\"23729,24229,24249,24260,24414,24457\"},ml:function(){},kHL:\"en\",time:function(){return(new Date).getTime()},log:function(b,d,c){var a=new Image,e=google,g=e.lc,f=e.li;a.onerror=(a.onload=(a.onabort=function(){delete g[f]}));g[f]=a;c=c||\"/gen_204?atyp=i&ct=\"+b+\"&cad=\"+d+\"&zx=\"+google.time();a.src=c;e.li=f+1},lc:[],li:0,Toolbelt:{}};\nwindow.google.sn=\"web\";window.google.timers={load:{t:{start:(new Date).getTime()}}};try{}catch(u){}window.google.jsrt_kill=1;\n</script><style>body{background:#fff;color:#000;margin:3px 8px}#gbar,#guser{font-size:13px;padding-top:1px !important}#gbar{float:left;height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh

    Read the article

  • IE and Google Chrome time out on an IIS6-hosted SSL page that Firefox handles well.

    - by Thomas
    Ok, here's the scenario: up until a few weeks ago, none of us noticed anything wrong with the corporate website. People were using it without complaint. Then a client complained that a specific page on the site was timing out for him, and only when he committed a POST action on a form filled with data. I checked it out, and it timed out for me, too. But it only timed out in Google Chrome and IE, not in Firefox. Additionally, the same page, on the same server, but served from a different domain name (one not under the protection of SSL, either) does not time out under any browser.

    To clarify: https://www.mysite.com/changes.php times out on POST, but the same page over http works fine. That distinction (SSL vs. non-SSL) seems to be important, as nothing else has changed. Our certificate is valid, and Firefox detects no errors thrown by the page. I've looked at the request and response headers from the page, and they all follow the correct formats.

    Then, after wandering through the site, I noticed a few other things. Both IE and Chrome will frequently time out on any page that is PHP-based. They never time out on static images or HTML files. I've looked at the site from a variety of different servers, my home and work workstations, and my netbook. Because of that, I've discounted a viral infection, as I highly doubt a virus is going to hit every one of the machines to which I have access in exactly the same manner.

    My setup is: Server: Win2k3, IIS6, PHP 5.2.9-1. Clients: IE7, IE8, Chrome (regular and dev channel): frequent timeouts on PHP pages. Firefox 2, Firefox 3: no timeouts. Firebug shows no errors or even lengthy periods serving the pages.

    I've spent 2 days searching for any tech knowledge that I can find, and my search parameters are all too general. Everyone has problems loading SSL pages in IE and Chrome for a wide variety of reasons. The infrequent nature of the timeouts and the fact that there are no errors being reported anywhere is starting to drive me insane. Does anyone have any insight on a problem like this?

    Read the article

  • Contour Mapping API

    - by Crash
    I have done some searching on Google, but haven't come up with much as of yet. I want to take a set of point data, which I had previously been using to create weighted points for a heat map through the Google API, and turn it into a contour map to overlay on the Google Maps API. I haven't seen anything in Google's code that would let me do this. Does anyone know of a good API to create such an overlay? Or is there possibly something I have overlooked that Google offers?

    Read the article

  • Exploring the Excel Services REST API

    - by jamiet
    Over the last few years Analysis Services guru Chris Webb and I have been on something of a crusade to enable better access to data that is locked up in countless Excel workbooks that litter the hard drives of enterprise PCs. The most prominent manifestation of that crusade up to now has been a forum thread that Chris began on Microsoft Answers entitled Excel Web App API? Chris began that thread with:

        I was wondering whether there was an API for the Excel Web App? Specifically, I was wondering if it was possible (or if it will be possible in the future) to expose data in a spreadsheet in the Excel Web App as an OData feed, in the way that it is possible with Excel Services?

    Up to recently the last 10 words of that paragraph, "in the way that it is possible with Excel Services", had completely washed over me; however, a comment on my recent blog post Thoughts on ExcelMashup.com (and a rant) by Josh Booker, in which Josh said:

        Excel Services is a service application built for SharePoint 2010 which exposes a REST API for Excel documents. We're looking forward to pros like you giving it a try now that Office 365 makes SharePoint more easily accessible. Can't wait for your future blog about using the REST API to load data from Excel on Office 365 in SSIS.

    made me think that perhaps the Excel Services REST API is something I should be looking into, and indeed that is what I have been doing over the past few days. And you know what? I'm rather impressed with some of what Excel Services' REST API has to offer.

    Unfortunately Excel Services' REST API also has one debilitating aspect that renders this blog post much less useful than it otherwise would be; namely, it is not publicly available from the Excel Web App on SkyDrive. Therefore all I can do in this blog post is show you screenshots of what the REST API provides in SharePoint rather than linking you directly to those REST resources; that's a great shame, because one of the benefits of a REST API is that it is easily and ubiquitously demonstrable from a web browser. Instead I am hosting a workbook on SharePoint in Office 365, because that does include Excel Services' REST API but, again, all I can do is show you screenshots. N.B. If anyone out there knows how to make Office-365-hosted spreadsheets publicly accessible (i.e. without requiring a username/password), please do let me know (because knowing which forum on which to ask the question is an exercise in futility).

    In order to demonstrate Excel Services' REST API I needed some decent data, and for that I used the World Tourism Organization Statistics Database and Yearbook - United Nations World Tourism Organization dataset hosted on Azure Datamarket (it's free, by the way); this dataset "provides comprehensive information on international tourism worldwide and offers a selection of the latest available statistics on international tourist arrivals, tourism receipts and expenditure" and you can explore the data for yourself here. If you want to play along at home by viewing the data as it exists in Excel, it can be viewed here. Let's dive in.

    The root of Excel Services' REST API is the model resource, which resides at:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model

    Note that this is true for every workbook hosted in a SharePoint document library - each Excel workbook is a RESTful resource. (Update: Mark Stacey on Twitter tells me that "It's turned off by default in onpremise Sharepoint (1 tickbox to turn on though)". Thanks Mark!)

    The data is provided as an Atom feed, but I have Firefox's feed-reading ability turned on, so you don't see the underlying XML goo. As you can see, there are four top-level resources: Ranges, Charts, Tables and PivotTables; exploring one of those resources is where things get interesting. Let's take a look at the Tables resource:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables

    Our workbook contains only one table, called 'Table1' (to reiterate, you can explore this table yourself here). Viewing that table via the REST API is pretty easy: we simply append the name of the table onto our previous URI:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')

    As you can see, that quite simply gives us a representation of the data in that table. What you cannot see from this screenshot is that this is pure HTML being served up; that is all well and good, but actually we can do more interesting things. If we specify that the data should be returned not as HTML but as an image:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')?$format=image

    then that data comes back as a pure image and can be used in any web page where you would ordinarily use images. This is the thing that I really like about Excel Services' REST API: we can embed an image in any web page, but instead of being a copy of the data, that image is actually live. If the underlying data in the workbook were to change, hitting refresh will show a new image. Pretty cool, no? The same is true of any charts or pivot tables in your workbook - those can be embedded as images too, and if the underlying data changes, boom, the image in your web page changes too. (A short fetch sketch appears at the end of this post.)

    There is a lot of data in the workbook, so the image returned by that previous URI is too large to show here; instead let's take a look at a different resource, this time a range:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')

    That URI returns cells A1 to C15 from a worksheet called "Data". And if we ask for that as an image again:

        http://server/_vti_bin/ExcelRest.aspx/Documents/TourismExpenditureInMillionsOfUSD.xlsx/model/Ranges('Data!A1|C15')?$format=image

    Were this image resource not behind a username/password, this would be a live image of the data in the workbook, as opposed to one that I had to copy and upload elsewhere. Nonetheless, I hope this little wrinkle doesn't detract from the innate value of what I am trying to articulate here: that an existing image in a web page can be changed on-the-fly simply by inserting some data into an Excel workbook. I for one think that that is very cool indeed!

    I think that's enough in the way of demo for now, as this shows what is possible using Excel Services' REST API. Of course, not all features work quite how I would like, and here is a bulleted list of some of my more negative feedback:

    - The URIs are pig-ugly. Are "_vti_bin" & "ExcelRest.aspx" really necessary as part of the URI? Would this not be better: http://server/Documents/TourismExpenditureInMillionsOfUSD.xlsx/Model/Tables('Table1')? That URI provides the necessary addressability and is a lot easier to remember.
    - Discoverability of these resources is not easy; we essentially have to handcrank a URI ourselves. Take the example of embedding a chart into a blog post - would it not be better if I could browse first through the document library to an Excel workbook and THEN through the workbook to the chart/range/table that I am interested in? Call it a wizard if you like. That would be really cool and would, I am sure, promote this feature and cut down on the copy-and-paste disease that the REST API is meant to alleviate.
    - The resources that I demonstrated can be returned as feeds as well as images or HTML simply by changing the format parameter to ?$format=atom; however, for some inexplicable reason they don't return OData, and no-one on the Excel Services team can tell me why (believe me, I have asked).
    - $format is an OData parameter; however, other useful parameters such as $top and $filter are not supported. It would be nice if they were.
    - Although I haven't demonstrated it here, Excel Services' REST API does provide a makeshift way of altering the data by changing the value of specific cells; however, what it does not allow you to do is add new data into the workbook. Google Docs allows this, and it was one of the motivating factors for Chris Webb's forum post that I linked to above.
    - None of this works for Excel workbooks hosted on SkyDrive.

    This blog post is as long as it needs to be for a short introduction, so I'll stop now. If you want to know more, then I recommend checking out a few links:

    - Excel Services REST API documentation on MSDN
    - So what does REST on Excel Services look like??? by Shahar Prish
    - Excel Services in SharePoint 2010 REST API Syntax by Christian Stich

    Any thoughts? Let's hear them in the comments section below!

    @Jamiet
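    To make the embedding scenario above concrete, here is a minimal C# sketch of pulling one of those REST resources down as an image; the host name and workbook path are the placeholders used throughout the post, and the use of default Windows credentials is an assumption:

        using System.Net;

        // Fetch a live image rendering of Table1 from Excel Services' REST API.
        // The URI shape mirrors the examples above; "server" is a placeholder host.
        var client = new WebClient { Credentials = CredentialCache.DefaultCredentials };
        string uri = "http://server/_vti_bin/ExcelRest.aspx/Documents/" +
                     "TourismExpenditureInMillionsOfUSD.xlsx/model/Tables('Table1')" +
                     "?$format=image";
        client.DownloadFile(uri, "table1.png");

    Re-running the download after the workbook changes would pull the refreshed rendering, which is exactly the "live image" behaviour described above.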

    Read the article

  • The challenge of communicating externally with IRM secured content

    - by Simon Thorpe
    I am often asked by customers how they handle sending IRM-secured documents to external parties. Their concern is that when they use IRM to secure sensitive information they need to share outside their business, third parties may be unable to install the software which enables them to gain access to the information. It is a very legitimate question, and one I've had to answer many times in the past 10 years whilst helping customers plan successful IRM deployments.

    The operating system does not provide the required level of content security

    The problem arises from what IRM delivers: persistent security for your sensitive information wherever it resides and whenever it is in use. Oracle IRM gives customers an array of features that help ensure sensitive information in an IRM document or email is always protected and only accessed by authorized users using legitimate applications. Examples of such functionality are:

    - Control of the clipboard, either by disabling it completely in the opened document or by allowing the cutting and pasting of information between secured IRM documents but not into insecure applications.
    - Protection against programmatic access to the document. Office documents and PDF documents can be accessed by other applications and scripts. With Oracle IRM we have to protect against this to ensure content cannot be leaked by someone writing a simple program.
    - Securing of decrypted content in memory. At some point during the process of opening and presenting a sealed document to an end user, we must decrypt it and give it to the application (Adobe Reader, Microsoft Word, Excel etc). This process must be secure so that someone cannot simply get access to the decrypted information.

    The operating system alone just doesn't have the functionality to deliver these types of features. This is why, for every IRM technology, there must be some extra software installed, and typically this software requires administrative rights to install. The fact is that if you want very strong security and access control over a document you are going to send to someone beyond your network infrastructure, there must be some software to provide that functionality.

    Simple installation with Oracle IRM

    The software used to control access to Oracle IRM sealed content is called the Oracle IRM Desktop. It is a small, free piece of software, roughly 12 MB in size. This software delivers functionality for everything a user needs to work with an Oracle IRM solution. It provides the functionality for all formats we support, the storage and transparent synchronization of user rights and, unique to Oracle, the ability to search inside sealed files stored on the local computer. In Oracle we've made every technical effort to ensure that installing this software is as simple as possible. In situations where the user's computer is part of the enterprise, this software is typically deployed using existing technologies such as Systems Management Server from Microsoft or Active Directory Group Policies. However, when sending sealed content externally, you cannot automatically install software on the end user's machine; you need to rely on them to download and install it themselves. Again, we've made every effort for this manual install process to be as simple as we can: from the small download size of the software itself through to the simple installation process, most end users are able to install and access sealed content very quickly.

    You can see for yourself how easily this is done by walking through our free and easy self-service demonstration of using sealed content.

    How to handle objections and ensure there is value

    However, the fact still remains that end users may object to installing the software, or may simply be unable to install it themselves due to lack of permissions. This is often a problem with any technology that requires specialized software to access a new type of document. In Oracle, over the past 10 years, we've learned many ways to get over this barrier of getting software deployed by external users. First, and I would say most importantly, the content MUST have some value to the person you are asking to install software. Without some type of value proposition you are going to find it very difficult to get past objections to installing the IRM Desktop. Imagine if you were going to secure the weekly campus restaurant menu and send it to contractors. Their initial response will be, "why on earth are you asking me to download some software just to access your menu!?" A valid objection... there is no value to the user in doing this. Now consider the scenario where you are sending one of your contractors their employment contract, which contains their address, social security number and bank account details. Are they likely to take 5 minutes to install the IRM Desktop? You bet they are, because there is real value in doing so and they understand why you are doing it. They want their personal information to be securely handled, and a quick download and install of some software is a small task in comparison to dealing with the loss of this information.

    Be clear in communicating this value

    So when sending sealed content to people externally, you must be clear in communicating why you are using an IRM technology and why they need to install some software to access the content. Do not try to avoid the issue; you must be clear and upfront about it. In doing so you will significantly reduce the "I didn't know I needed to do this..." responses and also gain respect for being straightforward. One customer I worked with called me, 6 months after their initial deployment of Oracle IRM, panicking that the partner they had started to share their engineering documents with refused to install any software to access this highly confidential intellectual property. I explained they had to communicate to the partner why they were doing this. I told them to go back with the statement that "the company takes protecting its intellectual property seriously and has decided to use IRM to control access to engineering documents", and that if the partner didn't respect this decision, they would find another company that would. The result? A few days later the partner had made the Oracle IRM Desktop part of their company's approved list of software.

    Companies are successful when sending sealed content to third parties

    We have many, many customers who send sensitive content to third parties. Some customers actually sell access to Oracle IRM protected content, and therefore 99% of their users are external to their business; one in particular has sold content to hundreds of thousands of external users. Oracle themselves use the technology to secure M&A documents, payroll data and security assessments which go beyond the traditional enterprise security perimeter. Pretty much every company who deploys Oracle IRM will at some point be sending those documents to people outside of the company; these customers must be successful, otherwise Oracle IRM wouldn't be successful. Because our software is used by a wide variety of companies, some of whom use it to sell content, I've often run into people I'm sharing a sealed document with who already have the IRM Desktop installed, due to accessing content from another company.

    The future

    In summary, I would say that yes, this is a hurdle that many customers are concerned about, but we see much evidence that in practice people leap that hurdle with relative ease, as long as they are good at communicating the value of using IRM and also take measures to ensure end users can easily go through the process of installation. We are constantly developing new ideas to reduce this hurdle, and maybe one day the operating systems will give us rich enough security functionality to need no software installation. Until then, Oracle IRM is by far the easiest solution to balance security and usability for your business. If you would like to evaluate it for yourselves, please contact us.

    Read the article

  • Categorized Document Management System

    - by cmptrgeekken
    At the company I work for, we have an intranet that provides employees with access to a wide variety of documents. These documents fall into several categories and subcategories, and each of these categories has its own web page. Below is one such page (each of the links shown will link to a similar view for that category). We currently store each document as a file on the web server and hand-code links to these documents whenever we need to add a new document. This is tedious and error-prone, and it also means we lack any sort of security for accessing these documents. I began looking into document management systems (like KnowledgeTree and OpenKM); however, none of these systems seems to provide a categorized view like in the preview above. My question is: does anyone know of any document management system that allows for the type of flexibility we currently have with hand-coding links to our documents into various webpages (major and minor), while also providing security, ease of use, and (less important) version control? Or do you think I'd be better off developing such a system from scratch?

    Read the article

  • GDI+ is giving me errors

    - by user146780
    I want to use GDI+ just to load a PNG. I included the headers and lib file, then I do:

        Bitmap b;
        b.fromfile(filename);

    I get this from the compiler though:

        Error 1 error C2146: syntax error : missing ';' before identifier 'b' c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
        Error 2 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
        Error 3 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 23
        Error 4 error C2440: '=' : cannot convert from 'const char [3]' to 'WCHAR *' c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 172
        Error 5 error C2228: left of '.FromFile' must have class/struct/union c:\users\josh\documents\visual studio 2008\projects\vectorizer project\vectorizer project\vectorizer project.cpp 179

    What is the correct way to do this? Thanks

    Read the article

  • What components of your site do you typically "offload" or embed?

    - by Chad
    Here's what I mean. In developing my ASP.NET MVC based site, I've managed to offload a great deal of the static file hosting and even some of the "work". Like so:

    - jQuery for my javascript framework. Instead of hosting it on my site, I use the Google CDN
    - Google Maps, obviously "offloaded" - no real work being performed on my server - Google hosted
    - jQueryUI framework - Google CDN
    - jQueryUI CSS framework - Google CDN
    - jQueryUI CSS framework themes - Google CDN

    So what I'm asking is this: other than what I've got listed, what aspects of your sites have you been able to offload, or embed, from outside services? A couple of others that come to mind:

    - OpenAuth - take much of the authentication process work off your site
    - Google Wave - when it comes out, take communication work off of your site

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 – Part 2 of 4

    - by Kelly Jones
    In my last post, I explained why we decided to use a Click Once application to solve our business problem. To quickly review: we needed a way for our business users to upload documents to a SharePoint 2007 document library en masse, set the meta data, set the permissions per document, and to do so easily.

    Let's look at the pieces that make up our solution. First, we have the Windows Form application. This app is deployed using Click Once and calls SharePoint web services in order to upload files, and then calls web services to set the meta data (SharePoint columns and permissions). Second, we have a custom action. The custom action is responsible for providing our users a link that will launch the Windows app, as well as passing values to it via the query string. And lastly, we have the web services that the Windows Form application calls. For our solution, we used both out-of-the-box web services and a custom web service in order to set the column values in the document library as well as the permissions on the documents.

    Now, let's look at the technical details of each of these pieces. (All of the code is downloadable from here: )

    Windows Form application deployed via Click Once

    The Windows Form application, called "Custom Upload", has just a few classes in it:

    - Custom Upload - the form
    - FileList.xsd - the dataset used to track the names of the files and their meta data values
    - SharePointUpload - this class handles uploading the file

    SharePointUpload uses an HttpWebRequest to transfer the file to the web server. We had to change this code from a WebClient object to the HttpWebRequest object because we needed to be able to set the timeout value.

        public bool UploadDocument(string localFilename, string remoteFilename)
        {
            bool result = true;
            //Need to use an HttpWebRequest object instead of a WebClient object
            // so we can set the timeout (WebClient doesn't allow you to set the timeout!)
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(remoteFilename);
            try
            {
                req.Method = "PUT";
                req.Timeout = 60 * 1000; //convert seconds to milliseconds
                req.AllowWriteStreamBuffering = true;
                req.Credentials = System.Net.CredentialCache.DefaultCredentials;
                req.SendChunked = false;
                req.KeepAlive = true;
                Stream reqStream = req.GetRequestStream();
                FileStream rdr = new FileStream(localFilename, FileMode.Open, FileAccess.Read);
                byte[] inData = new byte[4096];
                int bytesRead = rdr.Read(inData, 0, inData.Length);
                while (bytesRead > 0)
                {
                    reqStream.Write(inData, 0, bytesRead);
                    bytesRead = rdr.Read(inData, 0, inData.Length);
                }
                reqStream.Close();
                rdr.Close();
                System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
                if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created)
                {
                    String msg = String.Format("An error occurred while uploading this file: {0}\n\nError response code: {1}",
                        System.IO.Path.GetFileName(localFilename), response.StatusCode.ToString());
                    LogWarning(msg, "2ACFFCCA-59BA-40c8-A9AB-05FA3331D223");
                    result = false;
                }
            }
            catch (Exception ex)
            {
                LogException(ex, "{E9D62A93-D298-470d-A6BA-19AAB237978A}");
                result = false;
            }
            return result;
        }

    The class also contains the LogException() and LogWarning() methods.

    When the application is launched, it parses the query string for some initial values. The query string looks like this:

        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";

    The Srv is the path to the server (my virtual machine is named "clickonce"), the Sec is short for security - meaning HTTPS or HTTP, the Doc is the shortcut for which document library to use, and SiteName is the name of the SharePoint site. Speed is used to calculate an estimate of the download speed for each file. We added this so our users uploading documents would realize how long it might take for clients in remote locations (using slow WAN connections) to download the documents. The last value, Max, is the maximum size that the SharePoint site will allow documents to be. This allowed us to give users a warning that a file is too large before we even attempt to upload it. (A parsing sketch follows this post.)

    Another critical piece is the meta data collection. We organized our site using SharePoint content types, so when the app loads, it gets a list of the document library's content types. The user then selects one of the content types from the drop-down list, and we query SharePoint to get a list of the fields that make up that content type. We used both an out-of-the-box web service and one that we custom-built in order to get these values. Once we have the content type fields, we then add controls to the form. Which type of control we add depends on the data type of the field (DateTime pickers for date/time fields, etc). We didn't write code to cover every data type, since we were working with a limited set of content types and field data types.

    Here's a screen shot of the form, before and after someone has selected the content types and our code has added the custom controls.

    The other piece of meta data we collect is in the upper right corner of the app, "Users with access". This box lists the different SharePoint groups that we have set up, and by checking the boxes, the user can set the permissions on the uploaded documents. All of this meta data is collected and submitted to our custom web service, which then sets the values on the documents in the list. We'll look at these web services in a future post.

    In the next post, we'll walk through the Custom Action we built.
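    As an aside, here is a sketch of how that query string might be parsed on startup, assuming a reference to System.Web for HttpUtility; in a real Click Once app the string would come from the activation URI rather than a literal, and the field names simply mirror the example above:

        using System.Collections.Specialized;
        using System.Web;

        // Parse the Click Once query string into the app's initial settings.
        string queryString = "Srv=clickonce&Sec=N&Doc=DMI&SiteName=&Speed=128000&Max=50";
        NameValueCollection args = HttpUtility.ParseQueryString(queryString);

        string server    = args["Srv"];              // web server name
        bool   useHttps  = args["Sec"] == "Y";       // HTTPS or HTTP
        string library   = args["Doc"];              // document library shortcut
        string siteName  = args["SiteName"];         // SharePoint site name
        int    speedBps  = int.Parse(args["Speed"]); // assumed client download speed
        int    maxSizeMb = int.Parse(args["Max"]);   // maximum document size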

    Read the article

  • How to search for phrase queries in an inverted index structure?

    - by Mehdi Amrollahi
    If we want to search for a query like "t1 t2 t3" (t1, t2, t3 must be consecutive) in an inverted index structure, which ways could we do it?

    1. First we search for the term "t1" and find all documents that contain "t1", then do the same for "t2" and then "t3". Then we find the documents in which the positions of "t1", "t2" and "t3" are next to each other.
    2. First we search for the term "t1" and find all documents that contain "t1"; then, within the documents we found, we search for "t2", and within the result of this, we find the documents that contain "t3".

    Thanks.
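    A minimal C# sketch of the first approach, assuming a positional index that maps each term to a document-to-positions table; all names here are illustrative, not from the question:

        using System.Collections.Generic;
        using System.Linq;

        static class Phrase
        {
            // posIndex: term -> (docId -> sorted positions of that term in the doc)
            public static IEnumerable<int> Search(
                Dictionary<string, Dictionary<int, List<int>>> posIndex, string[] terms)
            {
                if (terms.Any(t => !posIndex.ContainsKey(t))) yield break;

                // Only documents containing every term can possibly match.
                var candidates = posIndex[terms[0]].Keys
                    .Where(d => terms.All(t => posIndex[t].ContainsKey(d)));

                foreach (int doc in candidates)
                {
                    // The phrase matches when some position p of t1 has t2 at p+1,
                    // t3 at p+2, and so on.
                    bool hit = posIndex[terms[0]][doc].Any(p =>
                        terms.Select((t, i) => posIndex[t][doc].Contains(p + i))
                             .All(found => found));
                    if (hit) yield return doc;
                }
            }
        }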

    Read the article

  • How to move an element in a sorted list and keep the CouchDb write "atomic"

    - by karlthorwald
    I have the elements of a list in CouchDB documents. Let's say these are 3 elements in 3 documents:

        { "id" : "783587346", "type" : "aList", "content" : "joey", "sort" : 100.0 }
        { "id" : "358734ff6", "type" : "aList", "content" : "jill", "sort" : 110.0 }
        { "id" : "abf587346", "type" : "aList", "content" : "jack", "sort" : 120.0 }

    A view retrieves all "aList" documents and displays them sorted by "sort". Now I want to move the elements. When I want to move "jack" to the middle, I can do this atomically, in one write, by changing his sort key to 105.0. The view now returns the documents in the new sort order. But after a lot of sorting I could end up, some years on, with sort keys like 50.99999 and 50.99998, and in extreme situations run out of digits. What can you recommend? Is there a better way to do this? I'd rather keep the elements in separate documents. Different users might edit different elements in parallel (which can also get tricky). Maybe there is a much better way?
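    For what it's worth, a short C# sketch of the midpoint scheme the question describes, with an assumed precision threshold for deciding when the whole list needs renumbering; both the threshold and the names are illustrative:

        // Place a moved element between its new neighbours with one write.
        static double Midpoint(double before, double after)
        {
            return (before + after) / 2.0;
        }

        // When neighbouring keys get this close, the doubles are running out
        // of precision and the list should be renumbered (100.0, 110.0, ...).
        const double MinGap = 1e-9;

        static bool NeedsRebalance(double before, double after)
        {
            return (after - before) < MinGap;
        }

    The occasional renumbering pass is the price of keeping each move a single atomic document write.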

    Read the article

  • Android v1.5 w/ browser data storage

    - by Sirber
    I'm trying to build an offline web application that can sync online if the network is available. I tried jQuery jStore, but the test page stops at "testing..." without a result. Then I tried Google Gears, which is supposed to work on the phone, but Gears is not found:

        if (window.google && google.gears) {
            google.gears.factory.getPermission();

            // Database
            var db = google.gears.factory.create('beta.database');
            db.open('cominar-compteurs');
            db.execute('create table if not exists Lectures' +
                ' (ID_COMPTEUR int, DATE_HEURE timestamp, kWh float, Wmax float, VAmax float, Wcum float, VAcum float);');
        } else {
            alert('Google Gears non trouvé.');
        }

    The code does work in Google Chrome v5.

    Read the article

  • Apply limit in mapreduce function in PHP?

    - by Rohan Kumar
    How do I apply a limit in PHP/MongoDB when using a mapreduce function? I tried this:

        $cmd = array( // condition array
            "mapreduce" => "user",
            "map" => $map,
            "reduce" => $reduce,
            "out" => array("inline" => 1),
            "limit" => 2
        );
        $db = connect();
        $query = $db->command($cmd); // run command

    But it's not working: it gives 2 documents. I can't use limit on subdocuments. If I have hundreds of subdocuments and I want paging in the subdocuments, it fails. Is it possible to apply a limit on subdocuments?

    Read the article

  • Open MS Word in "compare documents" mode from the command prompt.

    - by arraku-gmail-com
    I am working on a web project where the client needs the ability to first upload some MS Word documents and then compare any two of the uploaded documents. The idea I came up with is to first make the documents available using WebDAV and then open both documents from the command line with the "Compare side by side" option. That way he will be able to compare and modify the two documents. The problem is, I am not able to find any command that can be run from the command prompt to open two documents in compare mode. Also, if you know any other way to achieve this functionality, please share it with me. Thanks!
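    As far as I know, Word has no documented command-line switch for side-by-side compare, but the Word object model does expose it, so COM automation is one alternative. A minimal C# sketch, assuming the Word interop assemblies are available and C# 4's optional COM parameters; the file paths are placeholders:

        using Microsoft.Office.Interop.Word;

        // Open two documents and switch on Word's "View Side by Side" mode
        // (synchronized scrolling) via COM instead of a command-line switch.
        var app = new Application { Visible = true };
        Document original = app.Documents.Open(@"C:\docs\a.docx");
        Document revised  = app.Documents.Open(@"C:\docs\b.docx");

        // Pair the active window with the second document's window.
        object compareWith = revised.Name;
        app.Windows.CompareSideBySideWith(ref compareWith);

    If a merged redline is acceptable instead of side-by-side viewing, Word 2007 and later also offer Application.CompareDocuments, which produces a tracked-changes comparison document.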

    Read the article
