Search Results

Search found 18786 results on 752 pages for 'document storage'.


  • Where can someone store >100GB of pictures online? [closed]

    - by sbi
    A person who is not very computer-savvy needs to store 130GB of photos. The key parameters are:
      - a non-negligible probability that the company selling the storage will still exist, and the data still be accessible, for at least five years
      - data should be considered safe once uploaded
      - reasonable terms of service: Google Drive reserving the right to do practically anything they want with their users' data is not acceptable; the possibility that the CIA might look at those pictures is not considered a threat
      - easy to use from Windows, preferably as a drive
      - no nerve-wracking limitations ("cannot upload more than 10GB per day", "no files over 500MB", etc.) that serve no purpose other than pushing the user to the next-higher price plan
      - some upgrade path: there are currently 10-30GB of new photos per year, with a tendency to increase, which might bust a 150GB limit next January
      - some way to sort the pictures: currently they are sorted into folders, but something like tags would be just as good, if easy enough to apply
      - and of course pricing, although there is a reason this is the last bullet: reasonable data safety is considered more important
      Nice to have, but not necessary:
      - additional photo-related features (thumbnail generation, album sharing, etc.)
      - access from the web and from platforms other than Windows (smartphones)
      Let me stress this again: the person who needs this can copy pictures from the camera to the computer, can copy files in Explorer, and uses a web email service. That's about it; there is almost no understanding of what happens under the hood.

    Read the article

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only fix I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running: $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage However, I get an "unknown option -w" error. The slightly modified command $ sudo modprobe -r usb_storage fails with the message FATAL: Module usb_storage is in use. If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which site was more appropriate. I will delete and/or link whichever one is answered first.

    Read the article

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or whether there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned, should the pool recover? Or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 minutes or so.

    Read the article

  • XML Parsing Error: junk after document element

    - by Jake
    I am using the following script to generate a RSS feed for my site: <?php class RSS { public function RSS() { $root = $_SERVER['DOCUMENT_ROOT']; require_once ("../connect.php"); } public function GetFeed() { return $this->getDetails() . $this->getItems(); } private function dbConnect() { DEFINE ('LINK', mysql_connect (DB_HOST, DB_USER, DB_PASSWORD)); } private function getDetails() { $detailsTable = "rss_feed_details"; $this->dbConnect($detailsTable); $query = "SELECT * FROM ". $detailsTable ." WHERE feed_category = ''"; $result = mysql_db_query (DB_NAME, $query, LINK); while($row = mysql_fetch_array($result)) { $details = '<?xml version="1.0" encoding="ISO-8859-1" ?> <rss version="2.0"> <channel> <title>'. $row['title'] .'</title> <link>'. $row['link'] .'</link> <description>'. $row['description'] .'</description> <language>'. $row['language'] .'</language> <image> <title>'. $row['image_title'] .'</title> <url>'. $row['image_url'] .'</url> <link>'. $row['image_link'] .'</link> <width>'. $row['image_width'] .'</width> <height>'. $row['image_height'] .'</height> </image>'; } return $details; } private function getItems() { $itemsTable = "rss_posts"; $this->dbConnect($itemsTable); $query = "SELECT * FROM ". $itemsTable ." ORDER BY id DESC"; $result = mysql_db_query (DB_NAME, $query, LINK); $items = ''; while($row = mysql_fetch_array($result)) { $items .= '<item> <title>'. $row["title"] .'</title> <link>'. $row["link"] .'</link> <description><![CDATA['.$row["readable_date"]."<br /><br />".$row["description"]."<br /><br />".']]></description> </item>'; } $items .= '</channel> </rss>'; return $items; } } ?> The baffling thing is, the script works perfectly fine on my localhost but gives the following error on my remote server: XML Parsing Error: junk after document element Location: http://mysite.com/rss/main/ Line Number 2, Column 1:<b>Parse error</b>: syntax error, unexpected T_STRING in <b>/home/studentw/public_html/rss/global-reach/rssClass.php</b> on line <b>1</b><br /> ^ Can someone please tell me what's wrong?

    Read the article

  • Deserialize an XML file - an error in xml document (1,2)

    - by Lindsay Fisher
    I'm trying to deserialize an XML file which I receive from a vendor with XmlSerializer, however im getting this exception: There is an error in XML document (1, 2).InnerException Message "<delayedquotes xmlns=''> was not expected.. I've searched the stackoverflow forum, google and implemented the advice, however I'm still getting the same error. Please find the enclosed some content of the xml file: <delayedquotes id="TestData"> <headings> <title/> <bid>bid</bid> <offer>offer</offer> <trade>trade</trade> <close>close</close> <b_time>b_time</b_time> <o_time>o_time</o_time> <time>time</time> <hi.lo>hi.lo</hi.lo> <perc>perc</perc> <spot>spot</spot> </headings> <instrument id="Test1"> <title id="Test1">Test1</title> <bid>0</bid> <offer>0</offer> <trade>0</trade> <close>0</close> <b_time>11:59:00</b_time> <o_time>11:59:00</o_time> <time>11:59:00</time> <perc>0%</perc> <spot>0</spot> </instrument> </delayedquotes> and the code [Serializable, XmlRoot("delayedquotes"), XmlType("delayedquotes")] public class delayedquotes { [XmlElement("instrument")] public string instrument { get; set; } [XmlElement("title")] public string title { get; set; } [XmlElement("bid")] public double bid { get; set; } [XmlElement("spot")] public double spot { get; set; } [XmlElement("close")] public double close { get; set; } [XmlElement("b_time")] public DateTime b_time { get; set; } [XmlElement("o_time")] public DateTime o_time { get; set; } [XmlElement("time")] public DateTime time { get; set; } [XmlElement("hi")] public string hi { get; set; } [XmlElement("lo")] public string lo { get; set; } [XmlElement("offer")] public double offer { get; set; } [XmlElement("trade")] public double trade { get; set; } public delayedquotes() { } }
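    That particular "(1, 2)" error usually means the root element the serializer expects (name and namespace) does not match what is actually in the stream, so it is worth double-checking how the XmlSerializer instance is constructed. For reference, a minimal class shape that does round-trip the fragment shown is sketched below; the nested instrument element is modeled as a list of child objects rather than a string, the time-of-day fields are left out because XmlSerializer's DateTime mapping expects full xsd:dateTime values, and the file name is an assumption:

        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Serialization;

        [XmlRoot("delayedquotes")]
        public class DelayedQuotes
        {
            [XmlAttribute("id")]
            public string Id { get; set; }

            [XmlElement("instrument")]
            public List<Instrument> Instruments { get; set; }
        }

        public class Instrument
        {
            [XmlAttribute("id")]  public string Id { get; set; }
            [XmlElement("title")] public string Title { get; set; }
            [XmlElement("bid")]   public double Bid { get; set; }
            [XmlElement("offer")] public double Offer { get; set; }
            [XmlElement("trade")] public double Trade { get; set; }
            [XmlElement("close")] public double Close { get; set; }
            [XmlElement("spot")]  public double Spot { get; set; }
        }

        // Usage: elements not mapped above (headings, b_time, perc, ...) are simply ignored.
        var serializer = new XmlSerializer(typeof(DelayedQuotes));
        using (var reader = new StreamReader("delayedquotes.xml"))   // assumed file name
        {
            var quotes = (DelayedQuotes)serializer.Deserialize(reader);
        }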

    Read the article

  • PHP returning part of the code document

    - by The.Anti.9
    I have a PHP page that does a couple of different things depending on what action is set to in the GET data. Depending, it is supposed to return some JSON, but instead of doing anything it is supposed to it returns the bottom half of the code document itself, starting in the middle of the line. Heres the snippit from where it starts: ... } elseif ($_GET['action'] == 'addtop') { if (!isset($_GET['pname']) || !isset($_GET['url']) || !isset($_GET['artist']) || !isset($_GET['album']) || !isset($_GET['file'])) { die('Error: Incomplete data!'); } if (!file_exists($_GET['pname'].".txt")) { die('Error: No such playlist!'); } $plist = json_decode(file_get_contents($_GET['pname'].".txt"), true); $fh = fopen($_GET['pname'].".txt", 'w') or die('Could not open playlist!'); array_push($plist, array("artist" => $_GET['artist'], "album" => $_GET['album'], "file" => $_GET['file'], "url" => $_GET['url'])); fwrite($fh,json_encode($plist)); } elseif ($_GET['action'] == 'delfromp') { ... And here is what I get when I go to the page: $_GET['artist'], "album" = $_GET['album'], "file" = $_GET['file'], "url" = $_GET['url'])); fwrite($fh,json_encode($plist)); } elseif ($_GET['action'] == 'delfromp') { if (!isset($_GET['pname']) || !isset($_GET['id'])) { die('Error: Incomplete data!'); } if (!file_exists($_GET['pname'].".txt")) { die('Error: No such playlist!'); } $plist = json_decode(file_get_contents($_GET['pname'].".txt"), true); $fh = fopen($_GET['pname'].".txt", 'w') or die('Could not open playlist!'); unset($plist[$_GET['id']]); $plist = array_values($plist); fwrite($fh,json_encode($plist)); } elseif ($_GET['action'] == 'readp') { if (!file_exists($_GET['pname'].".txt")) { die('Error: No such playlist!'); } $plist = json_decode(file_get_contents($_GET['pname'].".txt"), true); $arr = array("entries" = $plist); $json = json_encode($arr); echo $json; } elseif ($_GET['action'] == 'getps') { $plists = array(); if ($handle = opendir('Playlists')) { while (false !== ($playlist = readdir($handle))) { if ($playlist != "." && $playlist != "..") { array_push($plists, substr($playlist, 0, strripos($playlist, '.')-1)); } } } else { die('Error: Can\'T open playlists!'); } $arr = array("entries"=$plists); $json = json_encode($arr); echo $json; } else { die('Error: No such action!'); } ? It starts in the middle of the array_push(... line. I really can't think of what it is doing. Theres no echos anywhere around it. Any ideas?

    Read the article

  • OpenXML sdk Modify a sheet in my Excel document

    - by user465202
    hi! I create an empty template in excel. I would like to open the template and edit the document but I do not know how to change the existing sheet. That's the code: using (SpreadsheetDocument xl = SpreadsheetDocument.Open(filename, true)) { WorkbookPart wbp = xl.WorkbookPart; WorkbookPart workbook = xl.WorkbookPart; // Get the worksheet with the required name. // To be used to match the ID for the required sheet data // because the Sheet class and the SheetData class aren't // linked to each other directly. Sheet s = null; if (wbp.Workbook.Sheets.Elements().Count(nm = nm.Name == sheetName) == 0) { // no such sheet with that name xl.Close(); return; } else { s = (Sheet)wbp.Workbook.Sheets.Elements().Where(nm = nm.Name == sheetName).First(); } WorksheetPart wsp = (WorksheetPart)xl.WorkbookPart.GetPartById(s.Id.Value); Worksheet worksheet = new Worksheet(); SheetData sd = new SheetData(); //SheetData sd = (SheetData)wsp.Worksheet.GetFirstChild(); Stylesheet styleSheet = workbook.WorkbookStylesPart.Stylesheet; //SheetData sheetData = new SheetData(); //build the formatted header style UInt32Value headerFontIndex = util.CreateFont( styleSheet, "Arial", 10, true, System.Drawing.Color.Red); //build the formatted date style UInt32Value dateFontIndex = util.CreateFont( styleSheet, "Arial", 8, true, System.Drawing.Color.Black); //set the background color style UInt32Value headerFillIndex = util.CreateFill( styleSheet, System.Drawing.Color.Black); //create the cell style by combining font/background UInt32Value headerStyleIndex = util.CreateCellFormat( styleSheet, headerFontIndex, headerFillIndex, null); /* * Create a set of basic cell styles for specific formats... * If you are controlling your table then you can simply create the styles you need, * this set of code is still intended to be generic. 
*/ _numberStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(3)); _doubleStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(4)); _dateStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(14)); _textStyleId = util.CreateCellFormat(styleSheet, headerFontIndex, headerFillIndex, null); _percentageStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(9)); util.AddNumber(xl, sheetName, (UInt32)3, "E", "27", _numberStyleId); util.AddNumber(xl, sheetName, (UInt32)3, "F", "3.6", _doubleStyleId); util.AddNumber(xl, sheetName, (UInt32)5, "L", "5", _percentageStyleId); util.AddText(xl, sheetName, (UInt32)5, "M", "Dario", _textStyleId); util.AddDate(xl, sheetName, (UInt32)3, "J", DateTime.Now, _dateStyleId); util.AddImage(xl, sheetName, imagePath, "Smile", "Smile", 30, 30); util.MergeCells(xl, sheetName, "D12", "F12"); //util.DeleteValueCell(spreadsheet, sheetName, "F", (UInt32)8); txtCellText.Text = util.GetCellValue(xl, sheetName, (UInt32)5, "M"); double number = util.GetCellDoubleValue(xl, sheetName, (UInt32)3, "E"); double numberD = util.GetCellDoubleValue(xl, sheetName, (UInt32)3, "F"); DateTime datee = util.GetCellDateTimeValue(xl, sheetName, (UInt32)3, "J"); //txtDoubleCell.Text = util.GetCellValue(spreadsheet, sheetName, (UInt32)3, "P"); txtPercentualeCell.Text = util.GetCellValue(xl, sheetName, (UInt32)5, "L"); string date = util.GetCellValue(xl, sheetName, (UInt32)3, "J"); double dateD = Convert.ToDouble(date); DateTime dateTime = DateTime.FromOADate(dateD); txtDateCell.Text = dateTime.ToShortDateString(); //worksheet.Append(sd); /* Columns columns = new Columns(); columns.Append(util.CreateColumnData(10, 10, 40)); worksheet.Append(columns); */ SheetProtection sheetProtection1 = new SheetProtection() { Sheet = true, Objects = true, Scenarios = true, SelectLockedCells = true, SelectUnlockedCells = true }; worksheet.Append(sheetProtection1); wsp.Worksheet = worksheet; wsp.Worksheet.Save(); xl.WorkbookPart.Workbook.Save(); xl.Close(); thanks!
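    One detail worth calling out in the snippet above: a brand-new Worksheet and SheetData are created and then assigned to the existing WorksheetPart at the end, which throws away whatever the template sheet already contained. A minimal sketch of editing the existing sheet in place instead (filename and sheetName carried over from the question, the cell edits themselves elided, and System.Linq plus the DocumentFormat.OpenXml namespaces assumed to be imported):

        using (SpreadsheetDocument xl = SpreadsheetDocument.Open(filename, true))
        {
            WorkbookPart wbp = xl.WorkbookPart;

            // Resolve the sheet by name, then follow its relationship id to the part.
            Sheet s = wbp.Workbook.Sheets.Elements<Sheet>()
                         .FirstOrDefault(sh => sh.Name != null && sh.Name.Value == sheetName);
            if (s == null) return;   // no sheet with that name

            WorksheetPart wsp = (WorksheetPart)wbp.GetPartById(s.Id.Value);

            // Reuse the SheetData that is already in the template instead of replacing
            // the whole Worksheet, so the existing cells survive the edit.
            SheetData sd = wsp.Worksheet.GetFirstChild<SheetData>();

            // ... append or update rows/cells in sd, apply styles, etc. ...

            wsp.Worksheet.Save();
            wbp.Workbook.Save();
        }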

    Read the article

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    Hi, A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows: An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format. Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token' Parameter name: value at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value) at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags) --- End of inner exception stack trace --- at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags) at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0() at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall) at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags) at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream) at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices) From my reading around it seems that the suggestion is to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client: "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem." Messages were transferred in to Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my question, finally: 1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages since it doesn't seem to be pulling them directly from the MAPI store? 2. Alternatively, if there is no database, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received. Many thanks, Mike

    Read the article

  • Partially Modifying an XML serialized document.

    - by Stacey
    I have an XML document, several actually, that will be editable via a front-end UI. I've discovered a problem with this approach (other than the fact that it is using xml files instead of a database... but I cannot change that right now). If one user makes a change while another user is in the process of making a change, then the second one's changes will overwrite the first. I need to be able to request objects from the xml files, change them, and then submit the changes back to the xml file without re-writing the entire file. I've got my entire xml access class posted here (which was formed thanks to wonderful help from stackoverflow!) using System; using System.Linq; using System.Collections; using System.Collections.Generic; namespace Repositories { /// <summary> /// A file base repository represents a data backing that is stored in an .xml file. /// </summary> public partial class Repository<T> : IRepository { /// <summary> /// Default constructor for a file repository /// </summary> public Repository() { } /// <summary> /// Initialize a basic repository with a filename. This will have to be passed from a context to be mapped. /// </summary> /// <param name="filename"></param> public Repository(string filename) { FileName = filename; } /// <summary> /// Discovers a single item from this repository. /// </summary> /// <typeparam name="TItem">The type of item to recover.</typeparam> /// <typeparam name="TCollection">The collection the item belongs to.</typeparam> /// <param name="expression"></param> /// <returns></returns> public TItem Single<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem> { using (var list = List<TCollection>()) { return list.Single(i => expression(i)); } } /// <summary> /// Discovers a collection from the repository, /// </summary> /// <typeparam name="TCollection"></typeparam> /// <returns></returns> public TCollection List<TCollection>() where TCollection : IDisposable { using (var list = System.Xml.Serializer.Deserialize<TCollection>(FileName)) { return (TCollection)list; } } /// <summary> /// Discovers a single item from this repository. /// </summary> /// <typeparam name="TItem">The type of item to recover.</typeparam> /// <typeparam name="TCollection">The collection the item belongs to.</typeparam> /// <param name="expression"></param> /// <returns></returns> public List<TItem> Select<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem> { using (var list = List<TCollection>()) { return list.Where( i => expression(i) ).ToList<TItem>(); } } /// <summary> /// Attempts to save an entire collection. /// </summary> /// <typeparam name="TCollection"></typeparam> /// <param name="collection"></param> /// <returns></returns> public Boolean Save<TCollection>(TCollection collection) { try { // load the collection into an xml reader and try to serialize it. 
System.Xml.XmlDocument xDoc = new System.Xml.XmlDocument(); xDoc.LoadXml(System.Xml.Serializer.Serialize<TCollection>(collection)); // attempt to flush the file xDoc.Save(FileName); // assume success return true; } catch { return false; } } internal string FileName { get; private set; } } public interface IRepository { TItem Single<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem>; TCollection List<TCollection>() where TCollection : IDisposable; List<TItem> Select<TItem, TCollection>(Predicate<TItem> expression) where TCollection : IDisposable, IEnumerable<TItem>; Boolean Save<TCollection>(TCollection collection); } }
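    Since Save serializes and rewrites the whole file, two users editing at the same time will take turns overwriting each other's changes (the classic lost update). Short of moving to a database, one way to close the window is to hold an exclusive lock on the file for the whole read-modify-write cycle, so the second writer cannot even read until the first has finished. A rough sketch of the idea, independent of the repository class above and with the retry/timeout policy left out:

        // Open the file exclusively; any other attempt to open it throws an IOException
        // until this using block completes, so callers would typically retry briefly.
        using (var fs = new FileStream(fileName, FileMode.Open,
                                       FileAccess.ReadWrite, FileShare.None))
        {
            var doc = new System.Xml.XmlDocument();
            doc.Load(fs);                  // read the current state under the lock

            // ... apply the single change being made by this user ...

            fs.SetLength(0);               // truncate, then rewrite in place
            fs.Position = 0;
            doc.Save(fs);
        }

    Another option in the same spirit is optimistic concurrency: stamp each entity with a version number and reject a Save whose version no longer matches what is on disk.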

    Read the article

  • Alfresco Community Edition Consultants

    - by Talkincat
    I am in the process of putting together a document management system based on Alfresco Community 3.2r2. Because Alfresco will not allow its partners to work with the Community edition, I have found it devilishly tricky to find consultants who specialize in Alfresco to help me with this project. Can anyone point me in the direction of someone who can help me get this system up and running? I will mostly need help with integrating Alfresco with Active Directory (LDAP passthrough, user/group sync and SSO) and with performance tuning the system. Any help is greatly appreciated.

    Read the article

  • Can somebody help me install this jBPM based workflow management suite?

    - by Eternal Saint
    1. It is the Book Workflow Interface software available at SourceForge: http://bookworkflowint.sourceforge.net/ Any instructions on installing and configuring it would be great, especially on Windows, though I can try Linux-specific ones as well. I could not find any installation instructions. I posted this on Stack Overflow by mistake and was directed here. 2. Can you suggest any good scanning/digitization workflow software (document imaging) that I can adapt to my scanner software? In fact even a simple one would do, perhaps based on hot folders. I just want to be able to track the unique id/barcode of the scanned book and its status so that it is not scanned again. The books or manuscripts could run to millions of pages. I thought of using some kind of generic bug-tracking tool to track just a few fields, but I don't know if that is the right choice. Thank you very much.

    Read the article

  • What are the differences between the Fujitsu ScanSnap S300 and the S1300?

    - by Techboy
    Please can you tell me what the technical differences are between the Fujitsu ScanSnap S300 and the S1300 scanners? S300 = http://www.fujitsu.com/emea/products/scanners/scansnap/tmpl_scanners_scansnap-S300.html S1300 = http://www.fujitsu.com/emea/products/scanners/scansnap/tmpl_scanners_scansnap-S1300.html The only difference I can see is that the S300 has a CCD Image Sensor and the S1300 has a CIS Image Sensor. Other than that, the resolutions, pages per minute and physical sizes of both models are exactly the same. If so, is there a reason why I would want the S1300 instead of the cheaper S300? I just want a scanner for duplex document scanning, for home use (documents, receipts, etc.).

    Read the article

  • Free Cloud Mind Map Solution

    - by Zekta Chan
    As software engineers we have a lot of discussions about design. Although we sync every document in the development process to Google Docs, mind maps just don't fit into Google Docs yet. The best we can do is store a copy and share it online. To me, a mind map is an irreplaceable tool for agile note-taking, and an electronic format beats a paper one for portability and extensibility. Does anyone have a good solution for "cloud-sharing" mind maps? (We are using FreeMind at the moment.) Thanks

    Read the article

  • Allow users to view Word documents only and not be able to edit, copy or save them.

    - by Alexander
    Hello. In a traditional Windows Server 2003 environment with AD, we have shared a folder for our policy documents (MS Word). These documents get edited/updated now and then by the administrator (the principal of the college). Users only have read-only access to the folder, but they can still do a save-as and then change the content. SharePoint is a possible solution but not easy to implement. We also thought of using a CMS on Linux and installing Joomla to let users only view the docs through a document management system... but is it possible to automatically retrieve the policy folder from the network and convert it, or put it into a format, that users can only view and not copy? We also thought of saving the docs to a secured PDF format, but the principal wants an automated system. Basically she just wants to work in Word, and the policies must be available to staff members on the network. Any ideas? Much appreciated.

    Read the article

  • Sharepoint 2007 reset permission inheritance

    - by e-mre
    I have a SharePoint 2007 document library which has several levels of folders and files. Some folders in the middle of the hierarchy do not inherit permissions from their parents and have their own unique permissions defined. It is a huge library and there are many folders like this. I am currently changing the permission model of my library, and I want to reset all of those unique permissions and have everything inherit permissions from the library root (something like the "Replace child object permissions" checkbox available in the Windows file system security window). If this is not possible, seeing a list of the folders that have unique permissions defined would also do.
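    If clicking through the UI for every folder is impractical, the WSS 3.0 object model exposes the same operation programmatically. A hedged sketch of a small console tool (the site URL and library name are assumptions, and it should obviously be tried against a copy of the library first):

        using System;
        using Microsoft.SharePoint;

        // Walk every folder in the library and drop its unique permissions so it
        // inherits from the library root again.
        using (SPSite site = new SPSite("http://yourserver/sites/docs"))   // assumed URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList library = web.Lists["Documents"];                       // assumed library name
            foreach (SPListItem folder in library.Folders)
            {
                if (folder.HasUniqueRoleAssignments)
                {
                    Console.WriteLine("Resetting permissions on " + folder.Url);
                    folder.ResetRoleInheritance();   // re-inherit from the parent
                }
            }
        }

    Listing the affected folders instead of resetting them is the same loop with the ResetRoleInheritance call removed.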

    Read the article

  • How can I edit my admission form which I filled in wrong? No other form is available now... what do I do? [closed]

    - by user60065
    Hi, I am a 2nd-year undergraduate student. Recently I filled in an admission form for final-year admission, but it came back to me after 2 days because I had entered wrong information. I want to edit the wrong information, and I have scanned the form. I am looking for a good online site where I can upload the scanned document and convert it into an editable format. I don't mind paying. If anyone can point me to a good site that would be great. Thanks in advance.

    Read the article

  • is this correct use of jquery's document.ready?

    - by Haroldo
    The below file contains all the javascript for a page. Performance is the highest priority. Is this the most efficient way? Do all click/hover events need to to be inside the doc.ready? //DOCUMENT.READY EVENTS //--------------------------------------------------------------------------- $(function(){ // mark events as not loaded $('.event').data({ t1_loaded: false, t2_loaded: false, t3_loaded: false, art_req: false }); //mark no events have been clicked $('#wrap_right').data('first_click_made', false); // cal-block event click $('#cal_blocks div.event, #main_search div.event').live('click', function(){ var id = $(this).attr('id').split('e')[1]; event_click(id); }); // jq history $.historyInit(function(hash){ if(hash) { event_click(hash); } }); // search $('#search_input').typeWatch ({ callback: function(){ var q = $('#search_input').attr('value'); search(q); }, wait : 350, highlight : false, captureLength : 2 }); $('#search_input, #main_search div.close').live('click',function(){ $(this).attr("value",""); reset_srch_res(); }); $('#main_search').easydrag(); $('a.dialog').colorbox(); //TAB CLICK -> AJAX LOAD TAB $('#wrap_right .rs_tabs li').live('click', function(){ $this = $(this); var id = $('#wrap_right').data('curr_event'); var tab = parseInt($this.attr('rel')); //hide other tabs $('#rs_'+id+' .tab_body').hide(); //mark current(clicked) tab $('#rs_'+id+' .rs_tabs li').removeClass('curr_tab'); $this.addClass('curr_tab'); //is the tab already loaded and hidden? var loaded = $('#e'+id).data('t'+tab+'_loaded'); //console.log('id: '+id+', tab: '+tab+', loaded: '+loaded); if(loaded === true) { $('#rs_'+id+' .tab'+tab).show(); if (tab == 2) { art_requested(id); } } else { //ajax load in the tab $('#rs_'+id+' .tab'+tab).load('index_files/tab'+tab+'.php?id='+id, function(){ //after load callback if (tab == 1) { $('#rs_' + id + ' .frame').delay(600).fadeIn(600) }; if (tab == 2) { art_requested(id); } }); //mark tab as loaded $('#e'+id).data('t'+tab+'_loaded', true); //fade in current tab $('#rs_'+id+' .tab'+tab).show(); } }) }); // LOAD RS FUNCTIONS //--------------------------------------------------------------------------- function event_click(id){ window.location.hash = id; //mark current event $('#wrap_right').data('curr_event', id); //hide any other events if($('#wrap_right').data('first_click_made') === true) { $('#wrap_right .event_rs').hide(); } //frame loaded before? 
var loaded = $('#e'+id).data('t1_loaded'); if(loaded === true) { $('#rs_'+id).show(); } else { create_frame(id); } //open/load the first tab $('#rs_'+id+' .t1').click(); $('#wrap_right').data('first_click_made', true); $('#cal_blocks').scrollTo('#e'+id, 1000, {offset: {top:-220, left:0}}); } function create_frame(id){ var art = ents[id].art; var ven = ents[id].ven; var type = ents[id].gig_club; //select colours for tabs if(type == 1){ var label = 'gig';} else if(type == 2){ var label = 'club';} else if(type == 0){ var label = 'other';} //create rs container for this event var frame = '<div id="rs_'+id+'" class="event_rs">'; frame += '<div class="title_strip"></div>'; frame += '<div class="rs_tabs"><ul class="'+label+'"><li class="t1 nav_tab1 curr_tab hand" rel="1"></li>'; if(art == 1){frame += '<li class="t2 nav_tab2 hand" rel="2"></li>';} if(ven == 1){frame += '<li class="t3 nav_tab2 hand" rel="3"></li>';} frame += '</ul></div>'; frame += '<div id="rs_content"><div class="tab_body tab1" ></div>'; if(art == 1){frame += '<div class="tab_body tab2"></div>';} if(ven == 1){frame += '<div class="tab_body tab3"></div>';} frame += '</div>'; frame += '</div>'; $('#wrap_right').append(frame); //mark current event in cal-blocks $('#cal_blocks .event_sel').removeClass('event_sel'); $('#e'+id).addClass('event_sel'); if($('#wrap_right').data('first_click_made') === false) { $('#wrap_right').delay(500).slideDown(); $('#rs_'+id+' .rs_tabs').delay(800).fadeIn(); } }; // FUNCTIONS //--------------------------------------------------------------------------- //check to see if an artist has been requested function art_requested(id){ var art_req = $('#e'+id).data('art_req'); if(art_req !== false) { //alert(art_req); $('#art_'+art_req).click(); } } //scroll artist panes smoothly (scroll bars cause glitches otherwise) function before (){ if(!IE){$('#art_scrollable .bio_etc').css('overflow','-moz-scrollbars-none');} } function after (){ if(!IE){$('#art_scrollable .bio_etc').css('overflow','auto');} } function prep_media_carousel(){ //youtube and soundcloud player $("#rs_content .yt_scrollable a.yt, #rs_content .yt_scrollable a.sc").colorbox({ overlayClose : false, opacity : 0 }); $("#colorbox").easydrag(true); $('#cboxOverlay').remove(); } function make_carousel_scrollable(unique_id){ $('#scroll_'+unique_id).scrollable({ size:1, clickable:false, nextPage:'#r_'+unique_id, prevPage:'#l_'+unique_id }); } function check_l_r_arrows(total, counter, art_id){ //left arrow if(counter > 0) { $('#l_'+art_id).show(); $('#l_'+art_id+'_inactive').hide(); } else { $('#l_'+art_id).hide(); $('#l_'+art_id+'_inactive').show(); } //right arrow if(counter < total-3) { $('#r_'+art_id).show(); $('#r_'+art_id+'_inactive').hide(); } else { $('#r_'+art_id).hide(); $('#r_'+art_id+'_inactive').show(); } } function reset_srch_res(){ $('#main_search').fadeOut(400).children().remove(); } function search(q){ $.ajax({ type: 'GET', url: 'index_files/srch/search.php?q='+q, success: function(e) { $('#main_search').html(e).show(); } }); }

    Read the article

  • Parse XML document

    - by Neil
    I am trying to parse a remote XML document (from Amazon AWS): <ItemLookupResponse xmlns="http://webservices.amazon.com/AWSECommerceService/2009-03-31"> <OperationRequest> <RequestId>011d32c5-4fab-4c7d-8785-ac48b9bda6da</RequestId> <Arguments> <Argument Name="Condition" Value="New"></Argument> <Argument Name="Operation" Value="ItemLookup"></Argument> <Argument Name="Service" Value="AWSECommerceService"></Argument> <Argument Name="Signature" Value="73l8oLJhITTsWtHxsdrS3BMKsdf01n37PE8u/XCbsJM="></Argument> <Argument Name="MerchantId" Value="Amazon"></Argument> <Argument Name="Version" Value="2009-03-31"></Argument> <Argument Name="ItemId" Value="603084260089"></Argument> <Argument Name="IdType" Value="UPC"></Argument> <Argument Name="AWSAccessKeyId" Value="[myAccessKey]"></Argument> <Argument Name="Timestamp" Value="2010-06-14T15:03:27Z"></Argument> <Argument Name="ResponseGroup" Value="OfferSummary,ItemAttributes"></Argument> <Argument Name="SearchIndex" Value="All"></Argument> </Arguments> <RequestProcessingTime>0.0318510000000000</RequestProcessingTime> </OperationRequest> <Items> <Request> <IsValid>True</IsValid> <ItemLookupRequest> <Condition>New</Condition> <DeliveryMethod>Ship</DeliveryMethod> <IdType>UPC</IdType> <MerchantId>Amazon</MerchantId> <OfferPage>1</OfferPage> <ItemId>603084260089</ItemId> <ResponseGroup>OfferSummary</ResponseGroup> <ResponseGroup>ItemAttributes</ResponseGroup> <ReviewPage>1</ReviewPage> <ReviewSort>-SubmissionDate</ReviewSort> <SearchIndex>All</SearchIndex> <VariationPage>All</VariationPage> </ItemLookupRequest> </Request> <Item> <ASIN>B0000UTUNI</ASIN> <DetailPageURL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB0000UTUNI</DetailPageURL> <ItemLinks> <ItemLink> <Description>Technical Details</Description> <URL>http://www.amazon.com/Garnier-Fructis-Fortifying-Conditioner-Minute/dp/tech-data/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Baby Registry</Description> <URL>http://www.amazon.com/gp/registry/baby/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wedding Registry</Description> <URL>http://www.amazon.com/gp/registry/wedding/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Add To Wishlist</Description> <URL>http://www.amazon.com/gp/registry/wishlist/add-item.html%3Fasin.0%3DB0000UTUNI%26SubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>Tell A Friend</Description> <URL>http://www.amazon.com/gp/pdp/taf/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> <ItemLink> <Description>All Customer Reviews</Description> <URL>http://www.amazon.com/review/product/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> 
<ItemLink> <Description>All Offers</Description> <URL>http://www.amazon.com/gp/offer-listing/B0000UTUNI%3FSubscriptionId%3DAKIAIYPTKHCWTRWWPWBQ%26tag%3Dws%26linkCode%3Dxm2%26camp%3D2025%26creative%3D386001%26creativeASIN%3DB0000UTUNI</URL> </ItemLink> </ItemLinks> <ItemAttributes> <Binding>Health and Beauty</Binding> <Brand>Garnier</Brand> <EAN>0603084260089</EAN> <Feature>Helps restore strength and shine</Feature> <Feature>Penetrates deep to nourish, repair and rejuvenate</Feature> <Feature>Makes hair softer and more manageable without weighing it down</Feature> <ItemDimensions> <Weight Units="hundredths-pounds">40</Weight> </ItemDimensions> <Label>Garnier</Label> <ListPrice> <Amount>419</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$4.19</FormattedPrice> </ListPrice> <Manufacturer>Garnier</Manufacturer> <NumberOfItems>1</NumberOfItems> <ProductGroup>Health and Beauty</ProductGroup> <ProductTypeName>ABIS_DRUGSTORE</ProductTypeName> <Publisher>Garnier</Publisher> <Size>5.0 oz</Size> <Studio>Garnier</Studio> <Title>Garnier Fructis Fortifying Fortifying Deep Conditioner, 3 Minute Masque - 5 oz</Title> <UPC>603084260089</UPC> </ItemAttributes> <OfferSummary> <LowestNewPrice> <Amount>229</Amount> <CurrencyCode>USD</CurrencyCode> <FormattedPrice>$2.29</FormattedPrice> </LowestNewPrice> <TotalNew>7</TotalNew> <TotalUsed>0</TotalUsed> <TotalCollectible>0</TotalCollectible> <TotalRefurbished>0</TotalRefurbished> </OfferSummary> </Item> </Items> </ItemLookupResponse> I am trying to extract data from the XML stream using XPathDocument, but with no luck: WebRequest request = HttpWebRequest.Create(url); WebResponse response = request.GetResponse(); //XmlDocument doc = new XmlDocument(); XPathDocument Doc = new XPathDocument(response.GetResponseStream()); XPathNavigator nav = Doc.CreateNavigator(); XPathNodeIterator ListPrice = nav.Select("/ItemLookupResponse/Items/Item/ItemAttributes/ListPrice"); foreach (XPathNavigator node in ListPrice) { Response.Write(node.GetAttribute("Amount", NAMESPACE)); } What am I missing? Thanks in advance!!
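    The response declares a default namespace on the root element (xmlns="http://webservices.amazon.com/AWSECommerceService/2009-03-31"), so an XPath expression with unprefixed names matches nothing — which is almost certainly why the iterator comes back empty. A sketch of the same selection with a namespace manager (the aws prefix is arbitrary), also noting that Amount and FormattedPrice are child elements here rather than attributes:

        XPathDocument doc = new XPathDocument(response.GetResponseStream());
        XPathNavigator nav = doc.CreateNavigator();

        // Bind a prefix to the default namespace used throughout the document.
        XmlNamespaceManager ns = new XmlNamespaceManager(nav.NameTable);
        ns.AddNamespace("aws", "http://webservices.amazon.com/AWSECommerceService/2009-03-31");

        XPathNodeIterator listPrices = nav.Select(
            "/aws:ItemLookupResponse/aws:Items/aws:Item/aws:ItemAttributes/aws:ListPrice", ns);

        foreach (XPathNavigator price in listPrices)
        {
            Response.Write(price.SelectSingleNode("aws:FormattedPrice", ns).Value);
        }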

    Read the article

  • ORM on iPhone. Simpler than Core Data.

    - by Alexander Babaev
    The question is rather simple. I know that there is SQLite. There is Core Data also. But I need something in between: more object-oriented than the SQLite API and simpler than Core Data. The main points are: I need access to stored entities only by id; no queries required. I need to store items of a single type, which means I can use only one table if I choose SQLite. I want automatic object-relational conversion, or object-storage conversion if the storage is not relational. I could use object archiving, but then I have to implement things myself (NSArchiver), whereas I want to write some kind of class and get persistence automatically, as can be done with Hibernate/ActiveRecord/Core Data/etc. Thanks.

    Read the article

  • ServiceBus WorkerRole DiagnosticMonitor Error

    - by user1596485
    I have a WebRole and 2 ServiceBus WorkerRoles running. During the OnStart of the roles I get the following exception: [System.ArgumentOutOfRangeException] invalid syntax for container log4net Parameter name: initialConfiguration Running Azure: ConfigurationManager version=1.7.0.3 ServiceBus version=1.7.0.1 Storage version=1.7.0.0 This occurs both while running locally in the dev Azure environment and in the cloud. All roles have the following configuration setting: <LocalStorage name="Log4Net" cleanOnRoleRecycle="true" sizeInMB="2048" /> All roles have the following code in OnStart: try { // Configure Diagnostics to poll log file to Blob Storage var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration(); diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5); diagnosticsConfig.Directories.DataSources.Add( new DirectoryConfiguration { Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath, DirectoryQuotaInMB = 512, Container = "wad-WebRolelog4net" }); DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", diagnosticsConfig); } catch { OnStop(); return false; }
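    One thing in the code shown that does violate Azure's blob container naming rules is the Container value: container names must be 3-63 characters of lowercase letters, digits and hyphens, and "wad-WebRolelog4net" contains uppercase characters — a likely match for an ArgumentOutOfRangeException raised against the initialConfiguration parameter. A sketch with only the container name changed, everything else as in the snippet above:

        var diagnosticsConfig = DiagnosticMonitor.GetDefaultInitialConfiguration();
        diagnosticsConfig.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        diagnosticsConfig.Directories.DataSources.Add(
            new DirectoryConfiguration
            {
                Path = RoleEnvironment.GetLocalResource("Log4Net").RootPath,
                DirectoryQuotaInMB = 512,
                Container = "wad-webrolelog4net"   // all lowercase: a valid blob container name
            });
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
                                diagnosticsConfig);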

    Read the article

  • How to reliably map vSphere disks <-> Linux devices

    - by brianmcgee
    Task at hand After a virtual disk has been added to a Linux VM on vSphere 5, we need to identify the disks in order to automate the LVM storage provision. The virtual disks may reside on different datastores (e.g. sas or flash) and although they may be of the same size, their speed may vary. So I need a method to map the vSphere disks to Linux devices. Ideas Through the vSphere API, I am able to get the device info: Data Object Type: VirtualDiskFlatVer2BackingInfo Parent Managed Object ID: vm-230 Property Path: config.hardware.device[2000].backing Properties Name Type Value ChangeId string Unset contentId string "d58ec8c12486ea55c6f6d913642e1801" datastore ManagedObjectReference:Datastore datastore-216 (W5-CFAS012-Hybrid-CL20-004) deltaDiskFormat string "redoLogFormat" deltaGrainSize int Unset digestEnabled boolean false diskMode string "persistent" dynamicProperty DynamicProperty[] Unset dynamicType string Unset eagerlyScrub boolean Unset fileName string "[W5-CFAS012-Hybrid-CL20-004] l****9-000001.vmdk" parent VirtualDiskFlatVer2BackingInfo parent split boolean false thinProvisioned boolean false uuid string "6000C295-ab45-704e-9497-b25d2ba8dc00" writeThrough boolean false And on Linux I may read the uuid strings: [root@lx***** ~]# lsscsi -t [1:0:0:0] cd/dvd ata: /dev/sr0 [2:0:0:0] disk sas:0x5000c295ab45704e /dev/sda [3:0:0:0] disk sas:0x5000c2932dfa693f /dev/sdb [3:0:1:0] disk sas:0x5000c29dcd64314a /dev/sdc As you can see, the uuid string of disk /dev/sda looks somehow familiar to the string that is visible in the VMware API. Only the first hex digit is different (5 vs. 6) and it is only present to the third hyphen. So this looks promising... Alternative idea Select disks by controller. But is it reliable that the ascending SCSI Id also matches the next vSphere virtual disk? What happens if I add another DVD-ROM drive / USB Thumb drive? This will probably introduce new SCSI devices in between. Thats the cause why I think I will discard this idea. Questions Does someone know an easier method to map vSphere disks and Linux devices? Can someone explain the differences in the uuid strings? (I think this has something to do with SAS adressing initiator and target... WWN like...) May I reliably map devices by using those uuid strings? How about SCSI virtual disks? There is no uuid visible then... This task seems to be so obvious. Why doesn't Vmware think about this and simply add a way to query the disk mapping via Vmware Tools?
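    For what it's worth, the two strings quoted above do look mechanically related: stripping the hyphens from the vSphere uuid gives 6000c295ab45704e9497b25d2ba8dc00, and the Linux SAS address 0x5000c295ab45704e is simply the first 16 of those hex digits with the leading 6 replaced by 5. Treating that as an assumption to be verified against more than one disk, a matching helper could be as small as this:

        // Assumption: the Linux SAS address is "5" followed by hex digits 2-16 of the
        // uuid reported by VirtualDiskFlatVer2BackingInfo. Verify before relying on it.
        static string VSphereUuidToSasAddress(string vsphereUuid)
        {
            string hex = vsphereUuid.Replace("-", "").ToLowerInvariant();  // 32 hex digits
            return "0x5" + hex.Substring(1, 15);
        }

        // "6000C295-ab45-704e-9497-b25d2ba8dc00" -> "0x5000c295ab45704e",
        // which matches /dev/sda in the lsscsi output above.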

    Read the article

  • Delete field using curl in couchdb

    - by 2x2p1p
    Hi guys. I searched and didn't find an answer: can I delete a single field of a CouchDB document using curl? The most I can do is delete a whole document: curl -X DELETE http://localhost:5984/users/jack?rev=1-cee2abbbe4afefa9b3b5db10260c0c94 Thanks.
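    For context: CouchDB's HTTP API has no call that removes a single field, so the usual approach is a read-modify-write of the whole document — GET it, drop the field, and PUT the full document back with its current _rev (CouchDB rejects the PUT with a conflict if someone else updated the document in between). With curl that means piping the GET output through something that edits the JSON before the PUT; the same cycle is sketched below in C#, with the field name treated as a placeholder:

        using System.Net.Http;
        using System.Text;
        using System.Text.Json.Nodes;

        var http = new HttpClient();
        string url = "http://localhost:5984/users/jack";

        // 1. Fetch the current document (it already contains _rev).
        JsonNode doc = JsonNode.Parse(await http.GetStringAsync(url));

        // 2. Remove the unwanted field locally ("some_field" is a placeholder).
        doc.AsObject().Remove("some_field");

        // 3. Write the whole document back; the embedded _rev guards against conflicts.
        await http.PutAsync(url,
            new StringContent(doc.ToJsonString(), Encoding.UTF8, "application/json"));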

    Read the article

  • C++ Coding Style Conventions Doc

    - by uray
    I need to write a coding style convention document for C++ for my team. Is there any example or reference for how such a document is made? What should I define, and which conventions should be avoided? Is there a C++ coding style standard defined somewhere, or would anyone care to share theirs? *Note: I know this has been asked many times, but what I need is something like this http://java.sun.com/docs/codeconv/html/CodeConvTOC.doc.html but specifically for C++.

    Read the article
