Search Results

Search found 18786 results on 752 pages for 'document storage'.


  • Efficient organization of spare cables and hardware

    - by Jake Wharton
    As many of you likely do, I have a growing collection of cables, hardware, and spare parts (screws, connectors, etc.). I'm looking for a good system of organization so that everything isn't a tangled, mismatched mess at risk of being damaged. Since the three things listed above all have varying sizes and degrees of delicacy, this poses an interesting problem. Presently I have those cheap plastic storage bins you find at Wal-Mart for everything. Cables that were once wrapped neatly have become tangled due to numerous "I know I have a cable for this" moments. Hardware is mixed into other bins with odds and ends, with no protection from each other. NICs, CPUs, and HDDs are all interacting and likely causing damage. Finally, there are stray parts sprinkled amongst these two, both in plastic bags and loose. I'm looking to unify this storage into a controlled chaos.

    Here are my thoughts. Odds and ends are the easiest: screws, connectors, and small electronic parts lend themselves perfectly to tackle boxes and jewelry boxes. Since these are usually dynamically compartmentalized, I can adjust for the contents and label them on the outside or inside of the lid. Cables are easily wrangled with short velcro strips, but that doesn't stop them from being all mixed in together. Hardware is the worst offender: size, shape, and degree of delicacy change with nearly every piece. I'm willing to sacrifice a bit of organization for a reasonably efficient system.

    What are your thoughts?
    - What is the best type of tackle or jewelry box to use? Most of them are cheap and flimsy. Is there a better alternative?
    - How can I organize cables so I know exactly (within reason) where one is? What about associating cables with hardware (wall adapter to router, etc.)?
    - What kind of storage unit lends itself to all shapes of hardware? Do I need to separate by size or degree of delicacy for better organization?


  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a fairly common installation (Gentoo Linux) and similar configuration on VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure, and reboot.

    Problem: Over time this duplicates many on-disk data blocks, which probably adds up to several tens of gigabytes. I suppose that if I could use a base system as a template, with the actual machines layered on top of it and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing).

    Question: Is there any way to easily do this on ESXi 4? I already share the portage tree via NFS, but this would not work for the rootfs.


  • Fast, reliable data transfers from/to China

    - by Nils
    We are a small company and we will need to transfer rather large amounts of data (10 GB+ each time) between Europe and China in the near future. As many may have experienced, Internet connections to or from China can be rather unreliable and slow at times without any apparent reason. For example, while sending data to China via FTP generally works well, it can be painfully slow in the other direction. Currently, we are investigating new ways to get high transfer rates in both directions.

    So far we have tried:
    - FTP (see above)
    - FTP over VPN services (generally slower than direct connections)
    - F2F (like Retroshare or Freenet - slow!)
    - Aspera (fast but expensive!)
    - BitTorrent (unreachable end nodes, because of firewalls which we must not configure)

    We would like to try:
    - Cloud storage (e.g. Amazon S3, Google Storage) - are those services always and reliably reachable from inside China?
    - Point-to-point VPN (currently not possible, because of the network, see above)

    I'd be especially grateful to hear from people who have already dealt with this kind of problem before.


  • What disk setup is needed / best practice for hypervisor-only servers?

    - by Luke404
    We are planning to buy some servers to run a hypervisor (Citrix XenServer or VMware vSphere; we still have to decide between the two), and we'd like to boot off the local redundant SD card module offered by various vendors (e.g. Dell, HP). The actual VMs will run from an existing iSCSI SAN (which, by the way, can't support booting the servers directly off the SAN). What are the reasons, if any, to choose completely diskless servers vs. having some local storage? And what would be the guidelines for choosing that local storage (number of spindles, RAID level, etc.)?


  • Network Drive Via Ethernet Port for Speed?

    - by Yar
    I have a MacBook with FireWire 400 and USB 2.0, so the only way I can get fast external storage is through the Ethernet port. A really fast FireWire 800 drive on ANOTHER computer is actually much faster than the built-in drive (according to XBench). So I thought I would try to go one better and buy an Ethernet-ready drive. I bought a Seagate GoFlex™ Home Network Storage System, and it seems like the only way to get it to work is to plug it into a router. Can this drive be used without a router (i.e., direct to computer)? Are there any drives that can be plugged directly into the Ethernet port for fast access? I don't want the drive on my router: I want it on my computer. Ideally I'd need 7200 rpm or faster, too.

    Update: I just chatted with Seagate and they said that this particular drive will not work that way. Will any others?


  • Any experience with SATA-to-SAS interposer cards?

    - by korkman
    Driven by the current price difference between SATA and SAS disks on one side, and the potentially bad behaviour of SATA disks in bigger storage arrays on the other, I have found so-called SATA-to-SAS interposer cards. Advertised as "seamlessly adding SAS capabilities to existing SATA disk drives", I wonder if anyone here has had some experience with these or similar products. The major benefits I can identify are the increased cable voltage (if all drives are SAS connected), the ability to power-cycle the drive, and multipath (if desired). Obviously the SATA drive will still have to be a RAID edition.

    The question is: do these cards indeed increase the overall reliability of a storage system, or will failing SATA disks cause trouble nevertheless?

    Edit: I'm not asking for hypothetical answers, only actual experience please. I'm well aware that the typical 10k SAS drive is more reliable (and better performing) than a 7200 rpm SATA drive. But how does a nearline SAS drive, which is physically the same disk as its SATA counterpart, compare to the SATA version with an interposer?


  • Expandable, Redundant, Easily recoverable

    - by MeIr
    I am desperate at this point. I have been looking for a "big storage" solution for a while on my own and I can't find anything that would suit my needs, but now push has come to shove.

    Current situation: I have about 6 TB of data storage (already full) - a Drobo. Yesterday the Drobo died on me, and it put me in a bad situation: I can't recover my data without buying another Drobo. From extensive research online I realized that Drobo is not the safest bet, and by now it seems a very poor choice. I ordered a new Drobo to try to get my data back; however, I don't want to be in the same situation later, and continuing to use Drobo promises this event will re-occur.

    What I am looking for:
    1) Inexpensive setup.
    2) Dynamically extendable - add more drives and/or replace a drive with a bigger capacity.
    3) Redundant - protection against 1-3 drive failures, depending on the total number of drives. For the sake of argument, let's assume that for every 4 drives, one should be able to fail without data loss.
    4) Easy data recovery - let's say the unforeseen happens; I would like to be able to recover information without buying new tools or replacements (example: a new Drobo).
    5) Should be USB or network-attached storage.
    6) No demand on speed. It doesn't have to be fast; I am not doing video editing on the setup. However, if the option exists, it would be nice to have decent speed.

    Afterthoughts: I reviewed a few options and FreeNAS looks nice, but it doesn't have #2, dynamic extendability. There are workarounds with pools, but it seems a bit complicated and unnecessary. Moreover, it seems like data safety is a big question - I saw some horror stories.

    Please advise on what options I have and what seems like an optimal solution (if any). I don't care if it has to be a Windows or Linux box or any other OS and/or software that has to run on top, but a simple solution is more attractive. Thank you!

    P.S.: Feel free to ignore the "Afterthoughts".


  • What is the "real" difference between a NAS and NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over standard ethernet, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over NFS on servers?


  • Email server - Disk quota sizes - suggestions?

    - by Ian H
    Working out a new server for an agency of 200 employees, with approx. 240 email accounts. Internally I'm arguing with myself over the amount of drive space to allocate to each user for the disk quota; I'm just looking for suggestions. Once I have a quota size decided, it will define the solution for storage. I've considered everything from 4 GB per account (which I feel is generous) down to 500 MB (which is rather restrictive in this day and age). The thing is, 4 GB per account is just under 1 TB of allocated storage for email alone. Does anyone follow a rule of thumb or have thoughts on this? Thanks in advance.
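    (Checking the arithmetic behind that "just under 1 TB" figure, using the poster's own numbers:)

        \[ 240 \ \text{accounts} \times 4\,\mathrm{GB} = 960\,\mathrm{GB} \approx 0.96\,\mathrm{TB} \]

    so the generous end of the range does indeed come in just under 1 TB, before any filesystem or mailbox-index overhead.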


  • Amazon S3: allow users to upload on a restricted basis (per bucket maybe)?

    - by Tom
    Hi there, I'm thinking about signing up to the Amazon S3 storage service. What I want to do is create a service where other people can register their own bucket with a certain amount of storage. These users will install my software, which then uploads their files. Of course, the users may only upload what they have paid for. For this to work I would like to create a separate bucket for each customer, each with its own properties. Question 1: is this possible with the API? How? This means that the installed software must have the rights needed to upload to my Amazon S3 account. Question 2: can I create individual authentication IDs for each bucket or customer, so that they can only upload with restrictions I have set? Thanks in advance.
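    (A minimal sketch of how both questions could be approached with the AWS SDK for Java and IAM - all names and the policy document are illustrative assumptions, not a definitive design. One bucket and one IAM user per customer, with a user policy that confines that user's credentials to their bucket. Note that S3 policies don't enforce a storage quota by themselves, so the "only what they paid for" limit would still have to be checked by your own service.)

        import com.amazonaws.services.identitymanagement.AmazonIdentityManagement;
        import com.amazonaws.services.identitymanagement.AmazonIdentityManagementClientBuilder;
        import com.amazonaws.services.identitymanagement.model.CreateAccessKeyRequest;
        import com.amazonaws.services.identitymanagement.model.CreateAccessKeyResult;
        import com.amazonaws.services.identitymanagement.model.CreateUserRequest;
        import com.amazonaws.services.identitymanagement.model.PutUserPolicyRequest;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3ClientBuilder;

        public class CustomerProvisioner {
            public static void main(String[] args) {
                String customer = "customer42";              // hypothetical customer id
                String bucket = "myservice-" + customer;     // one bucket per customer

                // Question 1: buckets can be created programmatically.
                AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
                s3.createBucket(bucket);

                // Question 2: per-customer credentials, restricted to this bucket only.
                String policy = "{\n"
                    + "  \"Version\": \"2012-10-17\",\n"
                    + "  \"Statement\": [{\n"
                    + "    \"Effect\": \"Allow\",\n"
                    + "    \"Action\": [\"s3:PutObject\", \"s3:GetObject\"],\n"
                    + "    \"Resource\": \"arn:aws:s3:::" + bucket + "/*\"\n"
                    + "  }]\n"
                    + "}";

                AmazonIdentityManagement iam = AmazonIdentityManagementClientBuilder.defaultClient();
                iam.createUser(new CreateUserRequest(customer));
                iam.putUserPolicy(new PutUserPolicyRequest(customer, "s3-upload", policy));

                // These keys are what the installed client software would sign uploads with.
                CreateAccessKeyResult key = iam.createAccessKey(
                        new CreateAccessKeyRequest().withUserName(customer));
                System.out.println("Access key: " + key.getAccessKey().getAccessKeyId());
            }
        }

    With a setup like this, anything outside arn:aws:s3:::myservice-customer42/* is denied to that customer's keys by default.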


  • What is the "real" difference between a NAS and NFS? Or, why pick a NAS device over "mere" NFS?

    - by warren
    From an end-user perspective, what is the difference between a NAS device and using NFS exports from a file server? They seem to accomplish the same end result. The difference between a SAN and other file storage is related (in my experience) to how they are connected to the server infrastructure. However, the difference between a NAS, connecting over a standard ethernet port, and NFS (sharing storage off specific servers, also over the network), seems more nebulous. Is there a good reason to pick a NAS filer over just running NFS on servers?


  • Explained: EF 6 and “Could not determine storage version; a valid storage connection or a version hint is required.”

    - by Ken Cox [MVP]
    I have a legacy ASP.NET 3.5 web site that I’ve upgraded to a .NET 4 web application. At the same time, I upgraded to Entity Framework 6. Suddenly one of the pages returned the following error:

        [ArgumentException: Could not determine storage version; a valid storage connection or a version hint is required.]
           System.Data.SqlClient.SqlVersionUtils.GetSqlVersion(String versionHint) +11372412
           System.Data.SqlClient.SqlProviderServices.GetDbProviderManifest(String versionHint) +91
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +92

        [ProviderIncompatibleException: The provider did not return a ProviderManifest instance.]
           System.Data.Common.DbProviderServices.GetProviderManifest(String manifestToken) +11431433
           System.Data.Metadata.Edm.Loader.InitializeProviderManifest(Action`3 addError) +11370982
           System.Data.EntityModel.SchemaObjectModel.Schema.HandleAttribute(XmlReader reader) +216

    A search for the error message didn’t turn up anything helpful, except that someone mentioned the error message was bogus in his case. The page in question uses the ASP.NET EntityDataSource control, consumed by a Telerik RadGrid. This is a fabulous combination for putting a huge amount of functionality on a page in a very short time. Unfortunately, the 6.0.1 release of EF6 doesn’t support EntityDataSource. According to the people in charge, support is planned but there’s no timeline for an EntityDataSource build that works with EF6. I’m not sure what to do in the meantime. Should I back out EF6 or manually wire up the RadGrid? The upshot is that you might want to rethink plans to upgrade to Entity Framework 6 for Web Forms projects if they rely on that handy control. It might also help to spend a UserVoice vote here: http://data.uservoice.com/forums/72025-entity-framework-feature-suggestions/suggestions/3702890-support-for-asp-net-entitydatasource-and-dynamicda


  • Architecture for a business objects / database access layer

    - by gregmac
    For various reasons, we are writing a new business objects/data storage library. One of the requirements of this layer is to separate the logic of the business rules from the actual data storage layer. It is possible to have multiple data storage layers that implement access to the same object - for example, a main "database" data storage source that implements most objects, and another "ldap" source that implements a User object. In this scenario, User can optionally come from an LDAP source, perhaps with slightly different functionality (e.g., not possible to save/update the User object), but otherwise it is used by the application in the same way. Another data storage type might be a web service, or an external database.

    There are two main ways we are looking at implementing this, and a co-worker and I disagree on a fundamental level about which is correct. I'd like some advice on which one is best to use. I'll try to keep my descriptions of each as neutral as possible, as I'm looking for some objective viewpoints here.

    1) Business objects are base classes, and data storage objects inherit from business objects. Client code deals with data storage objects. In this case, common business rules are inherited by each data storage object, and it is the data storage objects that are directly used by the client code. This has the implication that client code determines which data storage method to use for a given object, because it has to explicitly declare an instance of that type of object. Client code needs to explicitly know connection information for each data storage type it is using. If a data storage layer implements different functionality for a given object, client code explicitly knows about it at compile time because the object looks different. If the data storage method is changed, client code has to be updated.

    2) Business objects encapsulate data storage objects. In this case, business objects are directly used by the client application. The client application passes along base connection information to the business layer. The decision about which data storage method a given object uses is made by the business object code. Connection information would be a chunk of data taken from a config file (the client app does not really know/care about the details of it), which may be a single connection string for a database, or several connection strings for various data storage types. Additional data storage connection types could also be read from another spot - e.g., a configuration table in a database that specifies URLs to various web services. The benefit here is that if a new data storage method is added to an existing object, a configuration setting can be set at runtime to determine which method to use, and it is completely transparent to the client applications. Client apps do not need to be modified if the data storage method for a given object changes.

    3) A variant of the first: business objects are base classes, data source objects inherit from business objects, and client code deals primarily with the base classes. Client code declares variables of the base business object types, and Load()/Create()/etc. static methods on the business objects return the appropriate data source-typed objects. The architecture of this solution is similar to the first method, but the main difference is that the decision about which data storage object to use for a given business object is made by the business layer, not the client code.
I know there are already existing ORM libraries that provide some of this functionality, but please discount those for now (there is the possibility that a data storage layer is implemented with one of these ORM libraries) - also note I'm deliberately not telling you what language is being used here, other than that it is strongly typed. I'm looking for some general advice here on which method is better to use (or feel free to suggest something else), and why.
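    (For illustration only, a compact sketch of the second approach - names are hypothetical, and Java is used purely because the question deliberately doesn't name its language. The client sees only the business type; the business layer picks the storage implementation from configuration.)

        import java.util.Properties;

        // Storage contract: each backend (database, LDAP, web service) implements this.
        interface UserStore {
            UserData load(String id);
            void save(String id, UserData data); // an LDAP impl may refuse to save
        }

        class DatabaseUserStore implements UserStore {
            DatabaseUserStore(String connectionString) { /* open connection */ }
            public UserData load(String id) { return new UserData(id); }
            public void save(String id, UserData data) { /* write to DB */ }
        }

        class LdapUserStore implements UserStore {
            LdapUserStore(String url) { /* bind to directory */ }
            public UserData load(String id) { return new UserData(id); }
            public void save(String id, UserData data) {
                throw new UnsupportedOperationException("LDAP users are read-only");
            }
        }

        class UserData {
            final String id;
            UserData(String id) { this.id = id; }
        }

        // Business object used directly by client code; the storage choice is invisible to it.
        class User {
            private static UserStore store;

            // The business layer reads the storage decision from config once, at startup.
            static void configure(Properties config) {
                if ("ldap".equals(config.getProperty("user.store"))) {
                    store = new LdapUserStore(config.getProperty("user.ldap.url"));
                } else {
                    store = new DatabaseUserStore(config.getProperty("user.db.connection"));
                }
            }

            private final UserData data;
            private User(UserData data) { this.data = data; }

            public static User load(String id) { return new User(store.load(id)); }
            public void save() { store.save(data.id, data); }
        }

    Client code then calls User.configure(props) once at startup and User.load("jsmith") everywhere else; switching a backend becomes a configuration change rather than a code change, which is the crux of the disagreement between options 1 and 2.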


  • jQuery document.ready + ASP.NET ContentPlaceHolder causes Visual Studio IntelliSense problems

    - by Konstantin
    Hi! I want to execute JavaScript when the document is ready, without much syntax overhead. The idea is to use Site.Master and a ContentPlaceHolder:

        <script type="text/javascript">
            $(document).ready(function () {
                <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />
            });
        </script>

    and in inherited pages just write plain code:

        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            $("#Login").focus();
        </asp:Content>

    It works fine, but Visual Studio complains and gives warnings. The warning in the master page is "Expected expression" at the <asp:ContentPlaceHolder line. In inherited pages the warning is "Could not find 'OnReadyScript' in the current master page or pages". I tried using Writer.Write in the master page to render the script tag and wrap the code:

        <% Writer.Write(@"<script type=""text/javascript"">$(document).ready(function () {"); %>
        <asp:ContentPlaceHolder ID="OnReadyScrit" runat="server" />
        <% Writer.Write(@"});"); %>

    but page rendering terminates after the opening script tag is rendered. The HTML basically ends with:

        <script type="text/javascript">

    How can I make it work?


  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word interop, I am programmatically creating a new Word doc from a template (.dot) file. There are a few ways to do this, but I've chosen to use the AttachedTemplate property, as such:

        Dim oWord As New Word.Application()
        oWord.Visible = False
        Dim oDocuments As Word.Documents = oWord.Documents
        Dim oDoc As Word.Document = oDocuments.Add()
        oDoc.AttachedTemplate = sTemplatePath
        oDoc.UpdateStyles()

    (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.)

    This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically, the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection, but it does not. This is not a problem when using Documents.Add(); however, as I said, that method is not an option for me.

    Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)


  • Programming tips for writing document editors?

    - by Tesserex
    I'm asking this because I'm in the process of writing two such editors for my Mega Man engine: a tileset editor and a level editor. When I say document editor, I mean the superset application type covering things like image editors and text editors. All of these share things like toolbars, menu options, and, in the case of image editors and my apps, tool panes. We all know there's tons of advice out there for interface design in these apps, but I'm wondering about programming advice. Specifically, I have doubts about my code design in the following areas:

    - Many menu options toggle various behaviors. What's the proper way to reliably tie the checked state of the option to the status of the behavior? Sometimes it's more complicated, like options being disabled when there's no document loaded.
    - More and more consensus seems to be against using MDI, but how should I control tool panes? For example, I can't figure out how to get the panels to minimize and maximize along with the main window, like Photoshop does.
    - When tool panels are responsible for a particular part of the document, who actually owns that thing - the main window, or the panel class?
    - How do you handle communication between the tool panels and the main window? Currently mine is all event based, but it seems like there could be a better way.

    This seems to be a common class of GUI application, but I've never seen specific pointers on code design for them. Could you please offer whatever advice or experience you have for writing them?
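    (On the last two bullets, a minimal sketch of the listener pattern that typically backs this kind of event-based design - class and method names are hypothetical. The main window owns the document; each panel talks to it only through a narrow interface, so neither side depends on the other's concrete class.)

        // What a tool pane is allowed to tell the rest of the application.
        interface ToolPaneListener {
            void tileSelected(int tileId);
            void documentEdited();   // e.g. drives the enabled state of Save
        }

        // A tool pane knows only the listener interface, never the main window class.
        class TilesetPane {
            private final ToolPaneListener listener;

            TilesetPane(ToolPaneListener listener) { this.listener = listener; }

            void onUserClicksTile(int tileId) {
                listener.tileSelected(tileId);   // report the edit upward
            }
        }

        class MainWindow implements ToolPaneListener {
            private final TilesetPane tileset = new TilesetPane(this);
            private boolean documentDirty;

            public void tileSelected(int tileId) { /* update the current brush */ }

            public void documentEdited() {
                documentDirty = true;            // menu code checks this flag
            }
        }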


  • Newly created document library and columns using web services are not visible in SharePoint

    - by Royson
    Hi, for creating columns I worked from this code, and for creating a document library:

        Lists listService = new Lists();
        listService.PreAuthenticate = true;
        listService.Credentials = new NetworkCredential(username, password, domain);
        String url = "http://YourServer/SiteName/";
        listService.Url = url + @"/_vti_bin/lists.asmx";
        XmlNode ndList = listService.AddList(NewListName, "Description", 101);

    Both work successfully, but the problem I am facing is that the new columns and document library are not visible. I tried comparing the Field value of the visible and non-visible types. The difference I found is that the visible one (created manually) doesn't contain a Version value, whereas the one I am creating does. Can you help me out with this?

    EDIT: I checked the contents of the ndList node; the list is created and it is visible in my UI. But in SharePoint it should be listed under the 'Documents' tab, where the default 'Shared Documents' library is shown. If I click on 'Documents', then we can also see all the libraries created by this code. "Visible" means the library is displayed under the 'Documents' tab.


  • Not indenting the first paragraph of a LaTeX document

    - by Andrew
    In the standard LaTeX article class (and probably others as well), paragraph indentation follows standard American publishing norms of not indenting the first paragraph after a \section{} or \subsection{}. I've redefined \maketitle in a LaTeX document and put the actual title left-aligned as the last line, fairly close to the actual text (kind of like this):

        Author
        Date

        Title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

        Section title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

    Since the title is left-aligned and so close to the text, I'd like the first paragraph of the document to not be indented, just like with the headings:

        ...
        Title
        Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor...
        ...

    I've attempted to use \@afterindentfalse, which is what the section commands use, inside my renewed commands, but it doesn't work:

        \makeatletter
        \def\noindentation{\let\@afterindentfalse}
        \newcommand{\mytitle}[1]{%
          \vskip 2em
          {\bf\sffamily\LARGE #1}
          \noindentation}
        \renewcommand{\@maketitle}{
          \begin{flushleft}{
            % Author
            \@author \par
            % Date
            \@date \par
            % Title
            \mytitle{\@title}
          }
          \end{flushleft}
        }
        \makeatother

    By default the first paragraph in the article class is indented, so this question is applicable whether or not I renew \maketitle. So, what's the best way to automatically not indent the first paragraph of the document? Thanks!


  • How to use a JavaScript class from within document.ready

    - by Richard
    Hi, I have this countdown script wrapped as an object, located in a separate file. When I want to set up a counter, the timeout function in the countdown class cannot find the object again that I set up within the document ready. I sort of get that everything that is set up in the document ready is confined to that scope; however, it is possible to call functions within other document readys. Does anyone have a solution for how I could set up multiple counters/objects? Or do those basic JavaScript classes have to become plugins?

    This is some sample code showing how the class begins:

        function countdown(obj) {
            this.obj = obj;
            this.Div = "clock";
            this.BackColor = "white";
            this.ForeColor = "black";
            this.TargetDate = "12/31/2020 5:00 AM";
            this.DisplayFormat = "%%D%% Days, %%H%% Hours, %%M%% Minutes, %%S%% Seconds.";
            this.CountActive = true;
            this.DisplayStr;
            this.Calcage = cd_Calcage;
            this.CountBack = cd_CountBack;
            this.Setup = cd_Setup;
        }

    thanks, Richard


  • Create an XML document in a Java applet

    - by zproxy
    If I try to create a new XML document in a Java applet with this code (see http://java.sun.com/j2se/1.4.2/docs/api/javax/xml/parsers/DocumentBuilderFactory.html#newInstance()):

        DocumentBuilderFactory.newInstance();

    I get this error:

        Java Plug-in 1.6.0_19
        Using JRE version 1.6.0_19-b04 Java HotSpot(TM) Client VM
        javax.xml.parsers.FactoryConfigurationError: Provider <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> not found
            at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)

    I do not care about DTDs. Why is it looking for one? How am I supposed to create an XML document in a Java applet? How can I make it work? The enclosing HTML document looks like this:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <title>Loading...</title>
        </head>

    Can someone comment on this thread? The problem was with the entity resolver, which points to the w3.org web site. Access to the reference DTDs on this site has been restricted for application use. The solution was to implement my own entity resolver.

    Related:
    - http://forums.sun.com/thread.jspa?threadID=515055
    - http://stackoverflow.com/questions/1016286/org-apache-xerces-jaxp-saxparserfactoryimpl-not-found-when-importing-gears-api-in
    - http://java.itags.org/java-desktop/4839/
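    (Based on the resolution quoted above, a minimal sketch of the entity-resolver workaround - the goal is simply to stop the parser from ever fetching a DTD over the network. Note that the FactoryConfigurationError in the stack trace happens inside newInstance() itself, before any resolver runs, so this only helps once a factory can actually be created.)

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.EntityResolver;
        import org.xml.sax.InputSource;

        public class XmlInApplet {
            static Document newDocument() throws Exception {
                DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
                DocumentBuilder builder = factory.newDocumentBuilder();
                // Resolve every external entity (DTDs included) to an empty stream,
                // so nothing is ever fetched from w3.org.
                builder.setEntityResolver(new EntityResolver() {
                    public InputSource resolveEntity(String publicId, String systemId) {
                        return new InputSource(new StringReader(""));
                    }
                });
                return builder.newDocument();
            }
        }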


  • Use continue or Checked Exceptions when checking and processing objects

    - by Johan Pelgrim
    I'm processing, let's say, a list of Document objects. Before I record the processing of a document as successful, I first want to check a couple of things. Let's say the file referring to the document should be present, and something in the document should be present. Just two simple checks for the example, but think about 8 more checks before I have successfully processed my document. What would have your preference?

        for (Document document : documents) {
            if (!fileIsPresent(document)) {
                doSomethingWithThisResult("File is not present");
                continue;
            }
            if (!isSomethingInTheDocumentPresent(document)) {
                doSomethingWithThisResult("Something is not in the document");
                continue;
            }
            doSomethingWithTheSucces();
        }

    Or:

        for (Document document : documents) {
            try {
                fileIsPresent(document);
                isSomethingInTheDocumentPresent(document);
                doSomethingWithTheSucces();
            } catch (ProcessingException e) {
                doSomethingWithTheExceptionalCase(e.getMessage());
            }
        }

        public boolean fileIsPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("File is not present");
        }

        public boolean isSomethingInTheDocumentPresent(Document document) throws ProcessingException {
            ...
            throw new ProcessingException("Something is not in the document");
        }

    Which is more readable? Which is best? Is there even a better approach to doing this (maybe using a design pattern of some sort)? As far as readability goes, my preference currently is the exception variant... What is yours?


  • Liferay Document Management System Workflow

    - by Rajkumar
    I am creating a DMS in Liferay. So far I can upload documents into the Liferay document library, and I can also see the documents in the Documents and Media portlet. The problem is that although the status for the document is in the pending state, the workflow is not started. Below is my code. Can anyone please help? It's very urgent.

        Folder folder = null;

        // getting folder
        try {
            folder = DLAppLocalServiceUtil.getFolder(10181, 0, folderName);
            System.out.println("getting folder");
        } catch (NoSuchFolderException e) {
            // creating folder
            System.out.println("creating folder");
            try {
                folder = DLAppLocalServiceUtil.addFolder(userId, 10181, 0, folderName,
                        description, serviceContext);
            } catch (PortalException e3) {
                e3.printStackTrace();
            } catch (SystemException e3) {
                e3.printStackTrace();
            }
        } catch (PortalException e4) {
            e4.printStackTrace();
        } catch (SystemException e4) {
            e4.printStackTrace();
        }

        // adding file
        try {
            System.out.println("New File");
            fileEntry = DLAppLocalServiceUtil.addFileEntry(userId, 10181, folder.getFolderId(),
                    sourceFileName, mimeType, title, "testing description", "changeLog",
                    sampleChapter, serviceContext);

            Map<String, Serializable> workflowContext = new HashMap<String, Serializable>();
            workflowContext.put("event", DLSyncConstants.EVENT_CHECK_IN);
            DLFileEntryLocalServiceUtil.updateStatus(userId,
                    fileEntry.getFileVersion().getFileVersionId(),
                    WorkflowConstants.ACTION_PUBLISH, workflowContext, serviceContext);
            System.out.println("after entry" + fileEntry.getFileEntryId());
        } catch (DuplicateFileException e) {
        } catch (PortalException e) {
            e.printStackTrace();
        } catch (SystemException e) {
            e.printStackTrace();
        }

        return fileEntry.getFileEntryId();

    I have even used:

        WorkflowHandlerRegistryUtil.startWorkflowInstance(companyId, userId,
                fileEntry.getClass().getName(), fileEntry.getClassPK, fileEntry, serviceContext);

    but I still have the same problem.


  • Does the "Supporting Multiple Screens" document contradict itself?

    - by Neil Traft
    In the Supporting Multiple Screens document in the Android Dev Guide, some example screen configurations are given. One of them states that the small-ldpi designation is given to QVGA (240x320) screens with a physical size of 2.6"-3.0". According to this DPI calculator, a 2.8" QVGA display equates to 143 dpi. However, further down the page the document explicitly states that all screens over 140 dpi are considered "medium" density. So which is it, ldpi or mdpi? Is this a mistake? Does anyone know what the HTC Tattoo or similar device actually reports? I don't have access to any devices like this. Also, with the recent publishing of this document, I'm glad to see we finally have an explicit statement of the exact DPI ranges of the three density categories. But why haven't we been given the same for the small, medium, and large screen size categories? I'd like to know the exact ranges for all these. Thanks in advance for your help!
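    (The 143 dpi figure can be checked directly from the pixel dimensions given above: density is the diagonal resolution in pixels divided by the diagonal size in inches.)

        \[ \frac{\sqrt{240^2 + 320^2}}{2.8\ \text{in}} = \frac{400\ \text{px}}{2.8\ \text{in}} \approx 143\ \text{dpi} \]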


  • How to stream a PDF document from a servlet?

    - by Kumar
    Hi, I am creating a PDF document using JasperReports and I need to stream that PDF document from a servlet. Can anyone help me see where I made a mistake? This is the code snippet I am using in my application:

        ServletOutputStream servletOutputStream = response.getOutputStream();
        String fileName = "test.pdf";
        response.setContentType("application/pdf");
        response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
        response.setHeader("Cache-Control", "no-cache");
        try {
            Map parameters = new HashMap();
            parameters.put("SUBREPORT_DIR", JasperReportFilepath);
            parameters.put("TestId", testID);
            JasperPrint jprint = JasperFillManager.fillReport(filePath, parameters, conn);
            byte[] output = JasperExportManager.exportReportToPdf(jprint);
            System.out.println("Size====>" + output.length);
            servletOutputStream.write(output);
            servletOutputStream.flush();
            servletOutputStream.close();
            System.out.println("===============>Streaming perfectly");
        } catch (Exception e) {
            System.out.println("===============>JasperException" + e.getMessage());
        }

    I don't get any error message either; everything appears to work, but the document is not streamed. Please help me sort out the problem.
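    (A hedged suggestion, not a confirmed fix: browsers and some containers behave better for downloads when Content-Length is declared before the body is written, and nothing else - not even stray JSP whitespace - should touch the response first. A minimal sketch of that write-out order, where buildPdf() is a hypothetical stand-in for the Jasper export above:)

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class PdfServlet extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                byte[] output = buildPdf();  // stands in for JasperExportManager.exportReportToPdf(jprint)
                response.setContentType("application/pdf");
                response.setHeader("Content-Disposition", "attachment; filename=\"test.pdf\"");
                response.setContentLength(output.length);              // declare the exact size up front
                ServletOutputStream out = response.getOutputStream();  // touch the stream only now
                out.write(output);
                out.flush();                                           // let the container close it
            }

            private byte[] buildPdf() { return new byte[0]; }          // hypothetical placeholder
        }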


  • How to make an AJAX call immediately on document loading

    - by Ankur
    I want to execute an AJAX call as soon as the document is loaded. What I am doing is loading a string that contains data I will use for an autocomplete feature. This is what I have done, but it is not calling the servlet. I have removed the calls to the various JS scripts to make it clearer. I have done several similar AJAX calls in my code, but usually triggered by a click event; I am not sure of the syntax for doing it as soon as the document loads, but I thought this would be it (it's not):

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
        <script type="text/javascript">
            $(document).ready(function(){
                $.ajax({
                    type: "GET",
                    url: "AutoComplete",
                    dataType: 'json',
                    data: queryString,
                    success: function(data) {
                        var dataArray = data;
                        alert(dataArray);
                    }
                });
                $("#example").autocomplete(dataArray);
            });
        </script>
        <title></title>
        </head>
        <body>
        API Reference:
        <form><input id="example"> (try "C" or "E")</form>
        </body>
        </html>

