Search Results

Search found 4616 results on 185 pages for 'lists'.


  • Mysql slowing down my application

    - by user2985991
    My application is taking ages to load because my database isn't located on my computer. Does anyone have any idea how I can improve its performance? public Form1() { Splash splash = new Splash(); splash.Show(); InitializeComponent(); Load(); } public void Load() { db.SelectTeam(); db.SelectMatches(); } In db.SelectTeam and db.SelectMatches I fetch everything I need from MySQL and put it into lists... Sorry if it's confusing, but I don't know what to do, and sorry for my bad English. EDIT: Here are the queries: string query = "SELECT * FROM teams ORDER BY name"; string query = "SELECT * FROM matches ORDER BY date ASC";
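    A common way to keep the splash screen responsive while the remote queries run is to move the two database calls onto a background task and continue once they finish. A minimal sketch, assuming .NET 4.5 or later and the same db helper used in the question (the event-handler wiring is illustrative, not from the original post):

        // Requires using System.Threading.Tasks; hook this up as the form's Load event
        // instead of calling Load() from the constructor.
        private async void Form1_Load(object sender, EventArgs e)
        {
            var splash = new Splash();
            splash.Show();

            // Run the slow MySQL round-trips off the UI thread so the form stays responsive.
            await Task.Run(() =>
            {
                db.SelectTeam();
                db.SelectMatches();
            });

            splash.Close();
        }

    This does not make the queries themselves faster; selecting only the columns you actually need instead of SELECT *, and indexing the ORDER BY columns, are the usual next steps when the MySQL server is remote.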

    Read the article

  • How much does precomputation (matching a series of strings and their permutations with a set number

    - by nipun
    Consider a typical slot machine with n reels (say reel1: a,b,c,d,w1,d,b, etc.). On each play we generate a concatenated string of n symbols (for the example above, chars). We have a paytable which lists winning strings with their payout amounts. The complication is wild characters (list of wilds: w1,w2) which can each stand in for certain symbols, e.g. {w1: a,b,c}, {w2: a}. Is it really worthwhile to precompute all possible winning-string permutations with the wilds and match against those, or simply, at the time of occurrence, generate all combinations from the pattern in hand? I didn't really see much difference initially, but now that I need to scale the machine to handle 11+ reels with a much higher concentration of wilds than previously, I need to figure out the exact approach for this particular bit. Any ideas will be really appreciated :)
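    For the on-the-fly route, the cost is bounded by (number of substitutions + 1) per wild position in the single line that was actually spun, which usually stays far smaller than precomputing every wild variant of every paytable entry. A rough C# sketch of that idea (symbols reduced to single characters and all names invented for illustration; not taken from the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class WildEvaluator
        {
            // wilds['w'] = "abc" means 'w' may stand for a, b or c.
            public static IEnumerable<string> Expand(string spin, IDictionary<char, string> wilds)
            {
                IEnumerable<string> candidates = new[] { string.Empty };
                foreach (char symbol in spin)
                {
                    // A wild contributes itself plus its substitutions; a normal symbol only itself.
                    string options = wilds.TryGetValue(symbol, out var subs) ? symbol + subs : symbol.ToString();
                    candidates = candidates.SelectMany(prefix => options.Select(o => prefix + o)).ToList();
                }
                return candidates;
            }

            public static decimal Payout(string spin, IDictionary<char, string> wilds, IDictionary<string, decimal> paytable)
            {
                // Pay the best interpretation of the wilds; 0 if nothing matches.
                return Expand(spin, wilds).Where(paytable.ContainsKey)
                                          .Select(c => paytable[c])
                                          .DefaultIfEmpty(0m)
                                          .Max();
            }
        }

    Precomputation only starts to pay off if the same expanded paytable is matched against very many spins and the memory for the expanded entries is not a concern.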

    Read the article

  • trying to work through a list in sections

    - by user1714887
    I have a list of lists sorted by the second value of each inner list (the group). I now need to iterate through this to work on one "group" at a time. Each row has the form [name, group, data1, data2, data3, data4]. I wasn't sure if I need a while loop or some other sort of loop, or maybe itertools.groupby, but I've never used that. Any help would be appreciated. for i in range(int(max_group)): x1 = [] x2 = [] x3 = [] x4 = [] if data[i][1] == i+1: x1.append(data[i][2]) x2.append(data[i][3]) x3.append(data[i][4]) x4.append(data[i][5]) print x1 print 'next' # these are just to test where we're at

    Read the article

  • Postgresql has broken apt-get on Ubuntu

    - by Raphie Palefsky-Smith
    On ubuntu 12.04, whenever I try to install a package using apt-get I'm greeted by: The following packages have unmet dependencies: postgresql-9.1 : Depends: postgresql-client-9.1 but it is not going to be instal led E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a so lution). apt-get install postgresql-client-9.1 generates: The following packages have unmet dependencies: postgresql-client-9.1 : Breaks: postgresql-9.1 (< 9.1.6-0ubuntu12.04.1) but 9.1.3-2 is to be installed apt-get -f install and apt-get remove postgresql-9.1 both give: Removing postgresql-9.1 ... * Stopping PostgreSQL 9.1 database server * Error: /var/lib/postgresql/9.1/main is not accessible or does not exist ...fail! invoke-rc.d: initscript postgresql, action "stop" failed. dpkg: error processing postgresql-9.1 (--remove): subprocess installed pre-removal script returned error exit status 1 Errors were encountered while processing: postgresql-9.1 E: Sub-process /usr/bin/dpkg returned an error code (1) So, apt-get is crippled, and I can't find a way out. Is there any way to resolve this without a re-install? EDIT: apt-cache show postgresql-9.1 returns: Package: postgresql-9.1 Priority: optional Section: database Installed-Size: 11164 Maintainer: Ubuntu Developers <[email protected]> Original-Maintainer: Martin Pitt <[email protected]> Architecture: amd64 Version: 9.1.6-0ubuntu12.04.1 Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1) Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales Suggests: oidentd | ident-server, locales-all Conflicts: postgresql (<< 7.5) Breaks: postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1) Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.6-0ubuntu12.04.1_amd64.deb Size: 4298270 MD5sum: 9ee2ab5f25f949121f736ad80d735d57 SHA1: 5eac1cca8d00c4aec4fb55c46fc2a013bc401642 SHA256: 4e6c24c251a01f1b6a340c96d24fdbb92b5e2f8a2f4a8b6b08a0df0fe4cf62ab Description-en: object-relational SQL database, version 9.1 server PostgreSQL is a fully featured object-relational database management system. It supports a large part of the SQL standard and is designed to be extensible by users in many aspects. Some of the features are: ACID transactions, foreign keys, views, sequences, subqueries, triggers, user-defined types and functions, outer joins, multiversion concurrency control. Graphical user interfaces and bindings for many programming languages are available as well. . This package provides the database server for PostgreSQL 9.1. Servers for other major release versions can be installed simultaneously and are coordinated by the postgresql-common package. A package providing ident-server is needed if you want to authenticate remote connections with identd. 
Homepage: http://www.postgresql.org/ Description-md5: c487fe4e86f0eac09ed9847282436059 Bugs: https://bugs.launchpad.net/ubuntu/+filebug Origin: Ubuntu Supported: 5y Task: postgresql-server Package: postgresql-9.1 Priority: optional Section: database Installed-Size: 11164 Maintainer: Ubuntu Developers <[email protected]> Original-Maintainer: Martin Pitt <[email protected]> Architecture: amd64 Version: 9.1.5-0ubuntu12.04 Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04) Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales Suggests: oidentd | ident-server, locales-all Conflicts: postgresql (<< 7.5) Breaks: postgresql-plpython-9.1 (<< 9.1.5-0ubuntu12.04) Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.5-0ubuntu12.04_amd64.deb Size: 4298028 MD5sum: 3797b030ca8558a67b58e62cc0a22646 SHA1: ad340a9693341621b82b7f91725fda781781c0fb SHA256: 99aa892971976b85bcf6fb2e1bb8bf3e3fb860190679a225e7ceeb8f33f0e84b Description-en: object-relational SQL database, version 9.1 server PostgreSQL is a fully featured object-relational database management system. It supports a large part of the SQL standard and is designed to be extensible by users in many aspects. Some of the features are: ACID transactions, foreign keys, views, sequences, subqueries, triggers, user-defined types and functions, outer joins, multiversion concurrency control. Graphical user interfaces and bindings for many programming languages are available as well. . This package provides the database server for PostgreSQL 9.1. Servers for other major release versions can be installed simultaneously and are coordinated by the postgresql-common package. A package providing ident-server is needed if you want to authenticate remote connections with identd. Homepage: http://www.postgresql.org/ Description-md5: c487fe4e86f0eac09ed9847282436059 Bugs: https://bugs.launchpad.net/ubuntu/+filebug Origin: Ubuntu Supported: 5y Task: postgresql-server Package: postgresql-9.1 Priority: optional Section: database Installed-Size: 11220 Maintainer: Martin Pitt <[email protected]> Original-Maintainer: Martin Pitt <[email protected]> Architecture: amd64 Version: 9.1.3-2 Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.3-2) Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales Suggests: oidentd | ident-server, locales-all Conflicts: postgresql (<< 7.5) Breaks: postgresql-plpython-9.1 (<< 9.1.3-2) Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.3-2_amd64.deb Size: 4284744 MD5sum: bad9aac349051fe86fd1c1f628797122 SHA1: a3f5d6583cc6e2372a077d7c2fc7adfcfa0d504d SHA256: e885c32950f09db7498c90e12c4d1df0525038d6feb2f83e2e50f563fdde404a Description-en: object-relational SQL database, version 9.1 server PostgreSQL is a fully featured object-relational database management system. It supports a large part of the SQL standard and is designed to be extensible by users in many aspects. 
Some of the features are: ACID transactions, foreign keys, views, sequences, subqueries, triggers, user-defined types and functions, outer joins, multiversion concurrency control. Graphical user interfaces and bindings for many programming languages are available as well. . This package provides the database server for PostgreSQL 9.1. Servers for other major release versions can be installed simultaneously and are coordinated by the postgresql-common package. A package providing ident-server is needed if you want to authenticate remote connections with identd. Homepage: http://www.postgresql.org/ Description-md5: c487fe4e86f0eac09ed9847282436059 Bugs: https://bugs.launchpad.net/ubuntu/+filebug Origin: Ubuntu Supported: 5y Task: postgresql-server

    Read the article

  • Our embedded linux system won't recognize a USB Device if it is plugged in before powerup. Suggestions?

    - by Blaine
    We are developing on a small embedded device. This device us a gumstix overo board running OpenEmbedded linux. We have our development almost completely done, and have run into the strangest of bugs that we can't figure out. We have a USB Device (Spectrophotometer) that has a USB2.0 Connection and an external power supply for the light source. Typical behavior is that you plug in the power supply, then the USB connection to a host. When the usb connection is detected by the device, the device boots up and enables the light source and fan. The device is then able to be used by the host system. The problem is that if the device is plugged into the Gumstix before we turn on the Gumstix, the USB Device apparently is not probed by the system (and hence does not turn on). Under a normal situation, when the connection is initialized by plugging in the usb cable, the spectro turns itself on and becomes available to the system (this can be seen via "lsusb" typically). Neither of these things are happening. There is no device detected via "lsusb" and no dmesg errors of any kind that we can see. It is as if the device is not plugged in. The device does show up and work fine if we unplug the USB cable and plug it back in once the system is booted up. It turns on and shows up on the usb bus, and we can access it with our driver. On any other desktop or laptop, it does not matter if the host system is on or off when we plug in the spectrometer. This behavior is what I would consider to be "normal" - that the usb system is probed and initialized at boot time, and the usb devices come online. In other words, our system is fully functional as long as we plug in the usb device after the system is booted up. Unfortunately this isn't possible in our final product - everything comes on at once. Additional Info: 1) We have tried a flash drive attached to the system when the system is turned off. Booting up the system brings the flash drive online, as expected 2) There are no messages regarding the spectro or usb device (using dmesg). "lsusb" only lists the USB hubs / controllers. It is literally as if the device is not present and not plugged in. 3) We have tried a brand new image from gumstix and an older image from last year. Both images have this problem. This problem exists on all 3 gumstix devices we use. Does anyone have any suggestions? From what I can tell it isn't really possible to do a complete "reboot" of the usb system that is a complete emulation of "unplugging" and "replugging" a usb device. I feel like what is happening is that there is no initial probe on the usb bus that would trigger the usb handshaking, but this is somehow specific to the spectro. This seems to be a kernel issue or at least an issue in how the kernel is initializing the usb subsystem. I'm not really sure though. I have tried the gumstix mailing list, but there doesn't seem to be anyone who has seen this issue before. Any advice or suggestions on where to start looking would be fantastic. Thank you! Blaine output etc. 
$ uname -a Linux overo 2.6.33 #1 Tue Apr 27 08:35:38 PDT 2010 armv7l GNU/Linux When the system is up and running and spectro is plugged in (working as intended), this is lsusb: Bus 001 Device 116: ID 2457:1022 Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x2457 idProduct 0x1022 bcdDevice 0.02 iManufacturer 1 USB4000 1.01.11 iProduct 2 Ocean Optics USB4000 iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 46 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0x80 (Bus Powered) MaxPower 400mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 4 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 0 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x86 EP 6 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0000 (Bus Powered) dmesg output: usb usb1: usb auto-resume hub 1-0:1.0: hub_resume usb usb2: usb auto-resume ehci-omap ehci-omap.0: resume root hub hub 1-0:1.0: state 7 ports 1 chg 0000 evt 0000 hub 2-0:1.0: hub_resume hub 2-0:1.0: state 7 ports 3 chg 0000 evt 0000 hub 1-0:1.0: hub_suspend usb usb1: bus auto-suspend hub 2-0:1.0: hub_suspend usb usb2: bus auto-suspend ehci-omap ehci-omap.0: suspend root hub usb usb2: usb resume ehci-omap ehci-omap.0: resume root hub hub 2-0:1.0: hub_resume ehci-omap ehci-omap.0: GetStatus port 2 status 001803 POWER sig=j CSC CONNECT hub 2-0:1.0: port 2: status 0501 change 0001 hub 2-0:1.0: state 7 ports 3 chg 0004 evt 0000 hub 2-0:1.0: port 2, status 0501, change 0000, 480 Mb/s ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: new high speed USB device using ehci-omap and address 2 ehci-omap ehci-omap.0: port 2 high speed ehci-omap ehci-omap.0: GetStatus port 2 status 001005 POWER sig=se0 PE CONNECT usb 2-2: default language 0x0409 usb 2-2: udev 2, busnum 2, minor = 129 usb 2-2: New USB device found, idVendor=2457, idProduct=1022 usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-2: Product: Ocean Optics USB4000 usb 2-2: Manufacturer: USB4000 1.01.11 usb 2-2: uevent usb 2-2: usb_probe_device usb 2-2: configuration #1 chosen from 1 choice usb 2-2: uevent usb 2-2: adding 2-2:1.0 (config #1, interface 0) usb 2-2:1.0: uevent drivers/usb/core/inode.c: creating file '002' dmesg has nothing to say, and lusb simply lists nothing else but the two default usb controllers / hubs if we plug the device in before the system is 
turned on.

    Read the article

  • Running lmgrd on ubuntu 14.04 LTS

    - by SumanBhatR
    I have installed Xilinx 14.7 in ubuntu 14.04 LTS machine(i386 - 64bit). But I am unable to run lmgrd (for starting the license server). When I googled this problem, I found that lsb-core package needs to be installed. But the package is having many dependencies, I want to know how to install lsb-core package with all the necessary dependencies. Thanks for the help On running sudo apt-get install lsb-core I got the following output Reading package lists... Done Building dependency tree Reading state information... Done Package lsb-core is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'lsb-core' has no installation candidate So I downloaded lsb-core package from http://packages.ubuntu.com/trusty/misc/lsb-core site and used "sudo dpkg -i ./lsb-core_4.1+Debian11ubuntu6_i386.deb" to install it By doing it, I got the following output Selecting previously unselected package lsb-core. (Reading database ... 163205 files and directories currently installed.) Preparing to unpack .../lsb-core_4.1+Debian11ubuntu6_i386.deb ... Unpacking lsb-core (4.1+Debian11ubuntu6) ... dpkg: dependency problems prevent configuration of lsb-core: lsb-core depends on libc6 ( 2.3.5). lsb-core depends on libz1. lsb-core depends on libncurses5. lsb-core depends on libpam0g. lsb-core depends on lsb-invalid-mta (= 4.1+Debian11ubuntu6) | mail-transport-agent. lsb-core depends on at. lsb-core depends on binutils. lsb-core depends on cron | cron-daemon. lsb-core depends on libc6-dev | libc-dev. lsb-core depends on locales. lsb-core depends on m4. lsb-core depends on mailx | mailutils. lsb-core depends on ncurses-term. lsb-core depends on pax. lsb-core depends on psmisc. lsb-core depends on alien (= 8.36). lsb-core depends on python3. lsb-core depends on lsb-security (= 4.1+Debian11ubuntu6). lsb-core depends on time. dpkg: error processing package lsb-core (--install): dependency problems - leaving unconfigured Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: lsb-core So I want to know how to install lsb-core package with all the necessary dependencies in one go. Thanks for the help

    Read the article

  • How to create item in SharePoint2010 document library using SharePoint Web service

    - by ybbest
    Today, I’d like to show you how to create an item in a SharePoint 2010 document library using the SharePoint web services. Originally, I thought I could use WebSvcLists (lists.asmx), which provides methods for working with lists and list data. However, after a bit of Googling, I realized that I need to use WebSvcCopy (copy.asmx). Here is the code used: private const string siteUrl = "http://ybbest"; private static void Main(string[] args) { using (CopyWSProxyWrapper copyWSProxyWrapper = new CopyWSProxyWrapper(siteUrl)) { copyWSProxyWrapper.UploadFile("TestDoc2.pdf", new[] {string.Format("{0}/Shared Documents/TestDoc2.pdf", siteUrl)}, Resource.TestDoc, GetFieldInfos().ToArray()); } } private static List<FieldInformation> GetFieldInfos() { var fieldInfos = new List<FieldInformation>(); //The InternalName, DisplayName and FieldType are all required to make it work fieldInfos.Add(new FieldInformation { InternalName = "Title", Value = "TestDoc2.pdf", DisplayName = "Title", Type = FieldType.Text }); return fieldInfos; } Here is the code for the proxy wrapper. public class CopyWSProxyWrapper : IDisposable { private readonly string siteUrl; public CopyWSProxyWrapper(string siteUrl) { this.siteUrl = siteUrl; } private readonly CopySoapClient proxy = new CopySoapClient(); public void UploadFile(string testdoc2Pdf, string[] destinationUrls, byte[] testDoc, FieldInformation[] fieldInformations) { using (CopySoapClient proxy = new CopySoapClient()) { proxy.Endpoint.Address = new EndpointAddress(String.Format("{0}/_vti_bin/copy.asmx", siteUrl)); proxy.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials; proxy.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation; CopyResult[] copyResults = null; try { proxy.CopyIntoItems(testdoc2Pdf, destinationUrls, fieldInformations, testDoc, out copyResults); } catch (Exception e) { System.Console.WriteLine(e); } if (copyResults != null) System.Console.WriteLine(copyResults[0].ErrorMessage); System.Console.ReadLine(); } } public void Dispose() { proxy.Close(); } } You can download the source code here. ******Update********** It seems to be a bug that you cannot set the content type when creating a document item using Copy.asmx. In SP2007 the field type was Choice; however, in SP2010 it is actually Computed. I have tried using the Computed field type with no luck. I have also tried sending the ContentTypeId and this does not work. You might have to write your own web service to handle this. You can check my previous blog on how to get started with your own custom WCF service in SP2010 here. References: SharePoint 2010 Web Services SharePoint2007 Web Services SharePoint MSDN Forum

    Read the article

  • Dynamic Unpivot : SSIS Nugget

    - by jamiet
    A question on the SSIS forum earlier today asked: I need to dynamically unpivot some set of columns in my source file. Every month there is one new column and its set of Values. I want to unpivot it without editing my SSIS packages that is deployed Let’s be clear about what we mean by Unpivot. It is a normalisation technique that basically converts columns into rows. By way of example it converts something like this: AccountCode Jan Feb Mar AC1 100.00 150.00 125.00 AC2 45.00 75.50 90.00 into something like this: AccountCode Month Amount AC1 Jan 100.00 AC1 Feb 150.00 AC1 Mar 125.00 AC2 Jan 45.00 AC2 Feb 75.50 AC2 Mar 90.00 The Unpivot transformation in SSIS is perfectly capable of carrying out the operation defined in this example however in the case outlined in the aforementioned forum thread the problem was a little bit different. I interpreted it to mean that the number of columns could change and in that scenario the Unpivot transformation (and indeed the SSIS dataflow in general) is rendered useless because it expects that the number of columns will not change from what is specified at design-time. There is a workaround however. Assuming all of the columns that CAN exist will appear at the end of the rows, we can (1) import all of the columns in the file as just a single column, (2) use a script component to loop over all the values in that “column” and (3) output each one as a column all of its own. Let’s go over that in a bit more detail.   I’ve prepared a data file that shows some data that we want to unpivot which shows some customers and their mythical shopping lists (it has column names in the first row): We use a Flat File Connection Manager to specify the format of our data file to SSIS: and a Flat File Source Adapter to put it into the dataflow (no need a for a screenshot of that one – its very basic). Notice that the values that we want to unpivot all exist in a column called [Groceries]. Now onto the script component where the real work goes on, although the code is pretty simple: Here I show a screenshot of this executing along with some data viewers. As you can see we have successfully pulled out all of the values into a row all of their own thus accomplishing the Dynamic Unpivot that the forum poster was after. If you want to run the demo for yourself then I have uploaded the demo package and source file up to my SkyDrive: http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100529/Dynamic%20Unpivot.zip Simply extract the two files into a folder, make sure the Connection Manager is pointing to the file, and execute! Hope this is useful. @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!
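    The script component code itself only appears as a screenshot in the original post; the idea, in a rough C# sketch (the output is configured as an asynchronous output, and the column, buffer and delimiter choices below are guesses based on the description, not copied from the post):

        // SSIS script component (transformation). Assumes an input with [Customer] and the single
        // catch-all [Groceries] column, and an asynchronous output named "UnpivotedOutput" with
        // columns Customer and Item. Emits one output row per value found in [Groceries].
        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            if (Row.Groceries_IsNull) return;

            foreach (string item in Row.Groceries.Split(new[] { ',' }, StringSplitOptions.RemoveEmptyEntries))
            {
                UnpivotedOutputBuffer.AddRow();
                UnpivotedOutputBuffer.Customer = Row.Customer;
                UnpivotedOutputBuffer.Item = item.Trim();
            }
        }

    Because the splitting happens row by row at runtime, a file that grows an extra value next month flows through unchanged, which is the point of the dynamic approach.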

    Read the article

  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only covers loading by ID (using Get, not Load) and the Criteria API, for which a paging example is presented as well as some common filtering options (property equals/like/between; no examples of conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications. It basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are listed and evaluated in a chapter of their own, and the list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net); examples are provided whenever possible. However, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag where a lot of things are thrown in, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML documentation with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks: Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class) Load versus Get usage Other useful ISession methods First level cache (Identity Map pattern) Other collection types besides Bag (Set, List, Map, IdBag, etc.) Fetch options User Types Filters Named queries LINQ examples HQL examples And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book
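    Since "Load versus Get" is one of the gaps listed above, a tiny illustrative snippet of the difference (entity names are made up, not from the book): Get hits the database immediately and returns null when the row is missing, while Load returns an uninitialized proxy and only fails when it is first accessed, which makes it the cheaper choice for wiring up references.

        using (var session = sessionFactory.OpenSession())
        {
            Customer maybeMissing = session.Get<Customer>(42);   // SELECT now; null if not found
            Customer reference    = session.Load<Customer>(42);  // proxy only; no query yet

            var order = new Order { Customer = reference };      // sets the association without loading the row
            session.Save(order);
            // Accessing reference.Name here would trigger the SELECT and may throw
            // ObjectNotFoundException if the id does not exist.
        }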

    Read the article

  • Add a Scrollable Multi-Row Bookmarks Toolbar to Firefox

    - by Asian Angel
    If you keep a lot of bookmarks available in your Bookmarks Toolbar then you know that accessing some of them is not as easy as you would like. Now you can simplify the access process with the Multirow Bookmarks Toolbar extension for Firefox. Before As you can see, it has not taken long to fill up our “Bookmarks Toolbar” and use of the drop-down list is required. If you do not keep too many bookmarks in the “Bookmarks Toolbar” then that may not be a bad thing, but what if you have a very large number of bookmarks there? Multirow Bookmarks Toolbar in Action As soon as you have installed the extension and restarted Firefox you will see the default three-row display. If you are not worried about UI space then you are good to go. Those of you who like keeping the UI space to a minimum will want to have a look at this next part… You are not locked into a “three rows setup” with this extension. If you are OK with two rows then you can select that in the “Options” and enjoy a mini scrollbar on the right side. For our example we still had easy access to all three rows. Two rows still too much? Not a problem. Set the number of rows to one in the “Options” and still enjoy that scrolling goodness. If you do select one row only, do not panic when you do not see a scrollbar…it is still there. Hold your mouse over where the scrollbar is shown in the image above and use your middle mouse button to scroll through the multiple rows. You can see the transition between the second and third rows on our browser here… Nice, huh? Options The “Options” are extremely easy to work with…just enable/disable the extension here and set the number of rows that you want visible. Conclusion While the Multirow Bookmarks Toolbar extension may not seem like much at first glance, it does provide some nice flexibility for your “Bookmarks Toolbar”. You can save space and access your bookmarks easily without those drop-down lists. If you are looking for another great way to make the best use of the space available in your “Bookmarks Toolbar” then be sure to read our article on the Smart Bookmarks Bar extension for Firefox here. Links Download the Multirow Bookmarks Toolbar extension (Mozilla Add-ons)

    Read the article

  • Google chrome cannot be installed

    - by Zxy
    I downloaded latest version of google chrome and then tried to install it. However it gave me errors. I searched through the net and noticed that most of the people's problem solved when they installed missing dependecies. Therefore I tried to install them too but seems like it does not work. zero@ubuntu:~/Downloads$ sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages will be REMOVED: google-chrome-stable:i386 0 upgraded, 0 newly installed, 1 to remove and 23 not upgraded. 1 not fully installed or removed. After this operation, 116 MB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 169296 files and directories currently installed.) Removing google-chrome-stable:i386 ... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... zero@ubuntu:~/Downloads$ sudo dpkg -i google-chrome-stable_current_i386.deb Selecting previously unselected package google-chrome-stable:i386. (Reading database ... 169201 files and directories currently installed.) Unpacking google-chrome-stable:i386 (from google-chrome-stable_current_i386.deb) ... dpkg: dependency problems prevent configuration of google-chrome-stable:i386: google-chrome-stable:i386 depends on xdg-utils (>= 1.0.2). dpkg: error processing google-chrome-stable:i386 (--install): dependency problems - leaving unconfigured Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Errors were encountered while processing: google-chrome-stable:i386 Could you please help me? Thanks.

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file - such as mapping Categories/CategoryName to ShowProductsByCategory.aspx - requires three steps: (1) Define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) Create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) Write code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took just to read the preceding sentence (let alone write it) you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >
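    Much of the ASP.NET 4.0 simplification comes from the RouteCollection.MapPageRoute helper, which removes the need for a hand-written route handler class. A minimal sketch of the registration for the example above (the route name and page path are illustrative):

        // Global.asax.cs (ASP.NET 4.0 Web Forms)
        using System;
        using System.Web.Routing;

        public class Global : System.Web.HttpApplication
        {
            void Application_Start(object sender, EventArgs e)
            {
                // One call replaces the custom IRouteHandler class required in .NET 3.5 SP1.
                RouteTable.Routes.MapPageRoute(
                    "ProductsByCategory",            // route name
                    "Categories/{categoryName}",     // URL pattern
                    "~/ShowProductsByCategory.aspx"  // physical page that renders the grid
                );
            }
        }

    Inside ShowProductsByCategory.aspx the value is then available via Page.RouteData.Values["categoryName"] rather than HttpContext.Items.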

    Read the article

  • Oracle Service Registry 11gR1 Support for Oracle Fusion Middleware/SOA Suite 11g PatchSet 2

    - by Dave Berry
    As you might be aware, a few days back we released Patchset 2 (PS2) for several products in the Oracle Fusion Middleware 11g Release 1 stack including WebLogic Server and SOA Suite. Though there was no patchset released for Oracle Service Registry (OSR) 11g, being an integral part of Fusion Middleware & SOA, OSR 11g R1 ( 11.1.1.2 ) is fully certified with this release. Below is some recommended reading before installing OSR 11g with the new PS2 : OSR 11g R1 & SOA Suite 11g PS2 in a Shared WebLogic Domain If you intend to deploy OSR 11g in the same domain as the SOA Suite 11g, the primary recommendation is to install OSR 11g in its own Managed Server within the same Weblogic Domain as the SOA Suite, as the following diagram depicts : An important pre-requisite for this setup is to apply Patch 9499508, after installation. It basically replaces a registry library - wasp.jar - in the registry application deployed on your server, so as to enable co-deployment of OSR 11g & SOA Suite 11g in the same WLS Domain. The patch fixes a java.lang.LinkageError: loader constraint violation that appears in your OSR system log and is now available for download. The second, equally important, pre-requisite is to modify the setDomainEnv.sh/.cmd file for your WebLogic Domain to conditionally set the CLASSPATH so that the oracle.soa.fabric.jar library is not included in it for the Managed Server(s) hosting OSR 11g. Both these pre-requisites and other OSR 11g Topology Best Practices are covered in detail in the new Knowledge Base article Oracle Service Registry 11g Topology : Best Practices. Architecting an OSR 11g High Availability Setup Typically you would want to create a High Availability (HA) OSR 11g setup, especially on your production system. The following illustrates the recommended topology. The article, Hands-on Guide to Creating an Oracle Service Registry 11g High-Availability Setup on Oracle WebLogic Server 11g on OTN provides step-by-step instructions for creating such an active-active HA setup of multiple OSR 11g nodes with a Load Balancer in an Oracle WebLogic Server cluster environment. Additional Info The OSR Home Page on OTN is the hub for OSR and is regularly updated with latest information, articles, white papers etc. For further reading, this FAQ answers some common questions on OSR. The OSR Certification Matrix lists the Application Servers, Databases, Artifact Storage Tools, Web Browsers, IDEs, etc... that OSR 11g is certified against. If you hit any problems during OSR 11g installation, design time or runtime, the first place to look into is the logs. To find more details about which logs to check when & where, take a look at Where to find Oracle Service Registry Logs? Finally, if you have any questions or problems, there are various ways to reach us - on the SOA Governance forum on OTN, on the Community Forums or by contacting Oracle Support. Yogesh Sontakke and Dave Berry

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
    I'm using Ubuntu 12.04 LTS with Dell power-edge R720 server, facing the problem when I apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ... Running depmod. update-initramfs: deferring update (hook will be called later) The link /initrd.img is a dangling linkto /boot/initrd.img-3.2.0-37-generic-pae Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae gzip: stdout: No space left on device E: mkinitramfs failure cpio 141 gzip 1 update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1. run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1 Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0 -37-generic-pae.postinst line 1010. dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure): subprocess installed post-installation script returned error exit status 2 dpkg: dependency problems prevent configuration of linux-image-generic-pae: linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however: Package linux-image-3.2.0-37-generic-pae is not configured yet. dpkg: error processing linux-image-generic-pae (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup erro r from a previous failure. Errors were encountered while processing: linux-image-3.2.0-37-generic-pae linux-image-generic-pae E: Sub-process /usr/bin/dpkg returned an error code (1) ------------ even i tried with apt-get clean apt-get remove apt-get autoremove apt-get purge there is no difference it will show the same error message as above, even i checked the disk space ----------- Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda6 24030076 612456 22196964 3% / udev 16536644 4 16536640 1% /dev tmpfs 6618884 1164 6617720 1% /run none 5120 0 5120 0% /run/lock none 16547208 72 16547136 1% /run/shm cgroup 16547208 0 16547208 0% /sys/fs/cgroup /dev/sda1 93207 75034 13361 85% /boot /dev/sda10 9611492 1096076 8027176 13% /tmp /dev/sda12 9611492 226340 8896912 3% /opt /dev/sda13 9611492 152516 8970736 2% /srv /dev/sda7 9611492 592208 8531044 7% /home /dev/sda8 9611492 2656736 6466516 30% /usr /dev/sda9 9611492 696468 8426784 8% /var /dev/sda14 961237336 134563516 777845764 15% /usr/data /dev/sda15 618991384 84498388 503050052 15% /usr/data1 /dev/sda11 9611492 152616 8970636 2% /usr/local --------------- is there any problem on allotting the space to the partiations please let me know the solution its on urgent please help me on this issue regards

    Read the article

  • Silverlight Cream for April 02, 2010 -- #828

    - by Dave Campbell
    In this Issue: Phil Middlemiss, Robert Kozak, Kathleen Dollard, Avi Pilosof, Nokola, Jeff Wilcox, David Anson, Timmy Kokke, Tim Greenfield, and Josh Smith. Shoutout: SmartyP has additional info up on his WP7 Pivot app: Preview of My Current Windows Phone 7 Pivot Work From SilverlightCream.com: A Chrome and Glass Theme - Part I Phil Middlemiss is starting a tutorial series on building a new theme for Silverlight, in this first one we define some gradients and color resources... good stuff Phil Intercepting INotifyPropertyChanged This is Robert Kozak's first post on this blog, but it's a good one about INotifyPropertyChanged and MVVM and has a solution in the post with lots of code and discussion. How do I Display Data of Complex Bound Criteria in Horizontal Lists in Silverlight? Kathleen Dollard's latest article in Visual Studio magazine is in answer to a question about displaying a list of complex bound criteria including data, child data, and photos, and displaying them horizontally one at a time. Very nice-looking result, and all the code. Windows Phone: Frame/Page navigation and transitions using the TransitioningContentControl Avi Pilosof discusses the built-in (boring) navigation on WP7, and then shows using the TransitionContentControl from the Toolkit to apply transitions to the navigation. EasyPainter: Cloud Turbulence and Particle Buzz Nokola returns with a couple more effects for EasyPainter: Cloud Turbulence and Particle Buzz ... check out the example screenshots, then go grab the code. Property change notifications for multithreaded Silverlight applications Jeff Wilcox is discussing the need for getting change notifications to always happen on the UI thread in multi-threaded apps... great diagrams to see what's going on. Tip: The default value of a DependencyProperty is shared by all instances of the class that registers it David Anson has a tip up about setting the default value of a DependencyProperty, and the consequence that may have depending upon the type. Building a “real” extension for Expression Blend Timmy Kokke's code is WPF, but the subject is near and dear to us all, Timmy has a real-world Expression Blend extension up... a search for controls in the Objects and Timelines pane ... and even if that doesn't interest you... it's the source to a Blend extension! XPath support in Silverlight 4 + XPathPad Tim Greenfield not only talks about XPath in SL4RC, but he has produced a tool, XPathPad, and provided the source... if you've used XPath, you either are a higher thinker than me(not a big stretch), or you need this :) Using a Service Locator to Work with MessageBoxes in an MVVM Application Josh Smith posted about a question that comes up a lot: showing a messagebox from a ViewModel object. This might not work for custom message boxes or unit testing. This post covers the Unit Testing aspect. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • How To: LIC of India Online Policy Payments And Status Enquiries

    - by Kavitha
    Life Insurance Corporation (LIC) of India is the largest state-owned insurance company in India and also the country’s largest investor. Premiums for insurance policies purchased from LIC are traditionally paid by visiting the nearest LIC office or with the help of LIC agents. It’s a time-consuming process, and most of us are fed up with standing in long queues at LIC offices to pay the premium amount. LIC Online Services Website No more worries: there is no need to stand in a long queue or approach an agent to pay your LIC policies. LIC of India has an online payment and renewal facility: http://licindia.in. To pay the policies online we have to register with LIC and log in to the site using the registered username and password. Once you log in, you can enter your profile information and the LIC policies that are purchased in your name (register only the policies purchased in your name, otherwise you will land in trouble). Once registered, managing activities like payments, loan eligibility checking, policy maturity, etc. is very easy. For online payment of policies you can find the Pay Premium Online tab, which, when clicked, takes you to a page that lists all the policies that are due. Payments can be made using credit/debit cards and online banking systems. Almost all the Indian banks are covered as part of the online payment system. Other services that are available through the online system of LIC are: View ULIP Policies, Premium Calendar, Calculate Loan Eligibility, Revival Quote, Policy Maturity, Address Change Requests, etc. LIC Policy Status Enquiry Through Phone LIC also has a helpline/customer care number ‘1251’. You can call 1251 to know your policy status, premium due date, loan possibility and possible loan amount, time of maturity, etc. This article, How To: LIC of India Online Policy Payments And Status Enquiries, was originally published at Tech Dreams.

    Read the article

  • Translate report data export from RUEI into HTML for import into OpenOffice Calc Spreadsheets

    - by [email protected]
    A common question from users is how to import the data from the automated data export of Real User Experience Insight (RUEI) into tools for archiving, dashboarding or combination with other sets of data. XML is well-suited for such a translation via the companion Extensible Stylesheet Language Transformations (XSLT). Basically, XSLT utilizes XSL, a template describing what to read from your input XML data file and where to place it in the target document. The target document can be anything you like, i.e. XHTML, CSV, or even an OpenOffice spreadsheet, etc., as long as it is a plain text format. XML 2 OpenOffice.org Spreadsheet For the XSLT to work as an OpenOffice.org Calc import filter (How to add an XML Import Filter to OpenOffice Calc): Start OpenOffice.org Calc and select Tools > XML Filter Settings > New... Fill in the details as follows: Filter name: RUEI Import filter; Application: OpenOffice.org Calc (.ods); Name of file type: Oracle Real User Experience Insight; File extension: xml. Switch to the transformation tab and enter/select the following, leaving the rest untouched: XSLT for import: ruei_report_data_import_filter.xsl (please see the end of this blog post for a download of the referenced file). Select RUEI Import filter from the list and Test XSLT. Click on Browse to select Transform file: export.php.xml. OpenOffice.org Calc will transform and load the XML file you retrieved from RUEI in a human-readable format. You can now select File > Open... and change the file type to open your RUEI exports directly in OpenOffice.org Calc, just like any other native spreadsheet format. Files of type: Oracle Real User Experience Insight (*.xml); File name: export.php.xml. XML 2 XHTML Most XML-powered browsers provide inherent XSL Transformation capabilities; you only have to reference the XSLT stylesheet in the head of your XML file. Then open the file in your favourite web browser, Firefox, Opera, Safari or Internet Explorer alike. <?xml version="1.0" encoding="ISO-8859-1"?> <!-- inserted line below --> <?xml-stylesheet type="text/xsl" href="ruei_report_data_export_2_xhtml.xsl"?> <!-- inserted line above --> <report> You can find a patched example export from RUEI plus the above referenced XSL stylesheets here: export.php.xml - Example report data export from RUEI; ruei_report_data_export_2_xhtml.xsl - RUEI to XHTML XSL Transformation Stylesheet; ruei_report_data_import_filter.xsl - OpenOffice.org XML import filter for RUEI report export data. If you would like to do things like this on the command line you can use either Xalan or xsltproc. The basic command syntax for xsltproc is very simple: xsltproc -o output.file stylesheet.xslt inputfile.xml. You can use this with the above two stylesheets to translate RUEI data exports into XHTML and/or OpenOffice.org Calc ODS format, or you could write your own XSLT to transform into comma-separated value lists. Please let me know what you think or do with this information in the comments below. Kind regards, Stefan Thieme References used: OpenOffice XML Filter - Create XSLT filters for import and export - http://user.services.openoffice.org/en/forum/viewtopic.php?f=45&t=3490 SUN OpenOffice.org XML File Format 1.0 - http://xml.openoffice.org/xml_specification.pdf
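    Putting the xsltproc syntax above together with the files named in the post, an invocation might look like this (the output file name is just an example):

        xsltproc -o export.php.xhtml ruei_report_data_export_2_xhtml.xsl export.php.xml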

    Read the article

  • How to recover dpkg from corrupted downloads?

    - by rocker9455
    I just upgraded to the 10.10 RC earlier and had a few problems with graphic drivers (x didnt start) But i have remedied that now. When i run 'sudo apt-get install -f' i get this: will@UbuntuBox:/mnt/slax$ sudo apt-get install -f [sudo] password for will: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libmono-wcf3.0-cil openoffice.org-calc openoffice.org-core The following NEW packages will be installed: libmono-wcf3.0-cil The following packages will be upgraded: openoffice.org-calc openoffice.org-core 2 upgraded, 1 newly installed, 0 to remove and 0 not upgraded. 16 not fully installed or removed. Need to get 0B/32.5MB of archives. After this operation, 1,929kB disk space will be freed. Do you want to continue [Y/n]? Y (Reading database ... 201565 files and directories currently installed.) Preparing to replace openoffice.org-calc 1:3.2.1-6ubuntu2~10.04.1 (using .../openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb) ... Unpacking replacement openoffice.org-calc ... xz: (stdin): Compressed data is corrupt dpkg-deb: subprocess <decompress> returned error exit status 1 dpkg: error processing /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb (--unpack): short read on buffer copy for backend dpkg-deb during `./usr/lib/openoffice/basis3.2/program/libscfiltli.so' dpkg: regarding .../openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb containing openoffice.org-core: openoffice.org-core conflicts with openoffice.org-calc (<< 1:3.2.1-7ubuntu1) openoffice.org-calc (version 1:3.2.1-6ubuntu2~10.04.1) is present and installed. dpkg: error processing /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb (--unpack): conflicting packages - not installing openoffice.org-core Unpacking libmono-wcf3.0-cil (from .../libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb) ... dpkg-deb (subprocess): data: internal gzip read error: '<fd:0>: data error' dpkg-deb: subprocess <decompress> returned error exit status 2 dpkg: error processing /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb (--unpack): subprocess dpkg-deb --fsys-tarfile returned error exit status 2 Errors were encountered while processing: /var/cache/apt/archives/openoffice.org-calc_1%3a3.2.1-7ubuntu1_i386.deb /var/cache/apt/archives/openoffice.org-core_1%3a3.2.1-7ubuntu1_i386.deb /var/cache/apt/archives/libmono-wcf3.0-cil_2.6.7-3ubuntu1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Any idea how i can get the broken packages fixed? Cheers, Will

    Read the article

  • How to perform feature upgrade in SharePoint2010 part2

    - by ybbest
    In my last post, I showed you how to perform a feature upgrade and upgraded my feature from 0.0.0.0 to 1.0.0.1. In this post, I’d like to continue on this topic and upgrade the feature again. For the first version of my solution, I deployed a document library with a custom document set content type and then upgraded the solution to index the application number column. Now, I will create a new version of the solution that removes the list threshold. You can download the solution here. Once you extract the solution, the first version is in the Original folder, version 1.1 is in the Upgrade folder and version 1.2 is in the Upgrade2 folder. In order to deploy the original solution, you need to run sitecreation.ps1 in the script folder. You need to make the following changes to the existing solution. 1. Modify the ApplicationLibrary.Template.xml as highlighted in the screenshot in the original post. 2. Add the following code to the feature event receiver. public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters) { base.FeatureUpgrading(properties, upgradeActionName, parameters); SPWeb web = GetFeatureWeb(properties); SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName); switch (upgradeActionName) { case "IndexApplicationNumber": if (applicationLibrary != null) { SPField queueField = applicationLibrary.Fields["ApplicationNumber"]; queueField.Indexed = true; queueField.Update(); } break; case "RemoveListThreshold": applicationLibrary.EnableThrottling = false; applicationLibrary.Update(); break; } } 3. Package your solution and run the feature upgrade PowerShell script. $wspFolder ="v1.2" $scriptPath=Split-Path $myInvocation.MyCommand.Path $siteUrl = "http://ybbest" $featureToCheckGuid="1b9d84cd-227d-45f1-92d4-a43008aa8fe7" $requiredFeatureVersion="1.0.0.1" $siteUrlOfFeatureToBeChecked="http://ybbest" AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked Write-Host AppendLog "All features updated" "Green" References: Feature upgrade.

    Read the article

  • How do I resolve not fully installed package (python3-setuptools)?

    - by user3737693
    I was trying to install python3-setuptools, and when i run $ sudo apt-get install python3-setuptools I get this error: Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up python3-setuptools (0.6.34-0ubuntu1) ... Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools E: Sub-process /usr/bin/dpkg returned an error code (1) I tried apt-get clean, apt-get autoclean, apt-get remove python3-setuptools, dpkg --remove python3-setuptools, apt-get install -f, dpkg -P --force-remove-reinstreq, dpkg -P --force-all --force-remove-reinstreq and dpkg --purge, but none of them worked. Output of sudo dpkg -P --force-all --force-remove-reinstreq python3-setuptools (Reading database ... 225309 files and directories currently installed.) Removing python3-setuptools ... Traceback (most recent call last): File "/usr/bin/py3clean", line 32, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error processing python3-setuptools (--purge): subprocess installed pre-removal script returned error exit status 1 Traceback (most recent call last): File "/usr/bin/py3compile", line 36, in <module> from debpython import files as dpf File "/usr/share/python3/debpython/files.py", line 25, in <module> from debpython.pydist import PUBLIC_DIR_RE File "/usr/share/python3/debpython/pydist.py", line 28, in <module> from debpython.tools import memoize File "/usr/share/python3/debpython/tools.py", line 25, in <module> from datetime import datetime ImportError: /usr/bin/datetime.so: undefined symbol: _Py_ZeroStruct dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: python3-setuptools

    Read the article

  • The Minimalist Approach to Content Governance - Retire Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. Good news - the Retire Phase is actually more fun than the Manage Phase. During the Retire Phase our content management team should not have to track down content creators if the Request Phase of this process was completed successfully. The ownership meta data, success criteria and time stamp that were applied to the original content submission will help to manage content at the end of the content life cycle. The Retire Phase will provide the opportunity for us to prune irrelevant content items through archiving or deletion, keeping the content system clear of stale information and streamlining users' ability to browse and search for content. 1. Act on Metrics Established during the Request Phase Why - Some information is only relevant for a given amount of time. In Content Platform Migration Strategy - Artifacts vs Perishable Content we examined two content types - Artifacts and Perishable content. Understanding the differences between Artifacts and Perishable content will allow us to explicitly respect their various lifespans. Additionally, some content may have been part of a project that failed to meet the success criteria outlined in the Request Phase. Any content that did not meet the metrics outlined in the Request Phase should be considered for deletion. How - Thankfully, by adhering to The Minimalist Approach to Content Governance our content should have some level of meta data associated with it that will allow us to quickly sort and understand how to deal with it. Content Management Systems like Oracle's Universal Content Management (UCM) natively allow you to create and save advanced searches that can use content meta data like folders, author, expiration date, security settings and custom meta data to pull back listings of content for examination. Additionally, analytics are available for all content items that allow us to determine if the usage is meeting the success criteria that may have been outlined during the Request Phase. The lists that are produced from these approaches can be quickly reviewed for each project with the content owners and, based on the nature of the content and success criteria, undergo archiving or deletion. Impact - Retiring content that is no longer relevant will allow end users to have fast and relevant access to information across your enterprise. As we mentioned in our first post in this series - it is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year. The light level of effort that was placed into the Request Phase of this process will set us up to keep content clean and relevant for a long time to come. With an up-to-date content repository users will be able to quickly find the information that is critical to their work processes. You might not get a holiday named in your honor for managing the content system, but users will appreciate their quick access to quality information.

    Read the article

  • How to perform feature upgrade in SharePoint2010 part1

    - by ybbest
    Once your custom SharePoint solution has gone into production, any changes made to it require you to perform a feature upgrade. Today, I'd like to show you how to perform a feature upgrade. For the first version of my solution, I deploy a document library with a custom document set content type. You can download the solution here. Once you extract your solution, the first version is in the original folder. In order to deploy the original solution, you need to run the sitecreation.ps1 in the script folder. Next, I will modify the solution so that the application number column in the document library I just created is indexed. 1. Modify the ApplicationLibrary.Template.xml as highlighted below: 2. Add the following code to the feature event receiver. public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters) { base.FeatureUpgrading(properties, upgradeActionName, parameters); SPWeb web = GetFeatureWeb(properties); switch (upgradeActionName) { case "IndexApplicationNumber": SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName); if (applicationLibrary != null) { SPField queueField = applicationLibrary.Fields["ApplicationNumber"]; queueField.Indexed = true; queueField.Update(); } break; } } 3. Package your solution and run the feature upgrade PowerShell script. $wspFolder ="v1.1" $scriptPath=Split-Path $myInvocation.MyCommand.Path $siteUrl = "http://ybbest" $featureToCheckGuid="1b9d84cd-227d-45f1-92d4-a43008aa8fe7" $requiredFeatureVersion="0.0.0.0" $siteUrlOfFeatureToBeChecked="http://ybbest" AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked Write-Host AppendLog "All features updated" "Green" Note: If you have not versioned your feature explicitly, your feature version will be 0.0.0.0. References: Feature upgrade.
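    The highlighted change is shown as an image in the original post; as a rough sketch, assuming the feature is being versioned from the default 0.0.0.0 to 1.0.0.1, the upgrade actions section of ApplicationLibrary.Template.xml would contain something like the following (only the action name IndexApplicationNumber is confirmed by the event receiver above).

      <Feature xmlns="http://schemas.microsoft.com/sharepoint/" Version="1.0.0.1">
        <UpgradeActions>
          <!-- Invokes FeatureUpgrading with upgradeActionName = "IndexApplicationNumber"
               for feature instances still at the default version 0.0.0.0. -->
          <VersionRange BeginVersion="0.0.0.0" EndVersion="1.0.0.1">
            <CustomUpgradeAction Name="IndexApplicationNumber" />
          </VersionRange>
        </UpgradeActions>
      </Feature>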

    Read the article

  • 10 steps to enable ‘Anonymous Access’ for your SharePoint 2010 site

    - by KunaalKapoor
    What’s Anonymous Access? Anonymous access to your SharePoint site enables all visitors to view your SharePoint site anonymously without having to log in. With this blog I’d like to go through an easy step-by-step procedure to enable/set up anonymous access. Before you actually enable anonymous access on the site, you’ll have to change some settings at the web app level. So let’s start with that: Prerequisite(s): 1. A hosted SharePoint 2010 farm/server. 2. An existing SharePoint site. I just thought I’d mention the above pre-reqs, since the steps mentioned below wouldn’t be valid for a different type of site. Step 1: In Central Administration, under Application Management, click on Manage web applications. Step 2: Now select the site you want to enable anonymous access for and click on the Authentication Providers icon. Step 3: On the modal window click on the Default zone. Step 4: Now under the Edit Authentication section, check Enable anonymous access and click Save. This is basically to make the Anonymous Access authentication mechanism available at the web app level in IIS. Now the web application will allow anonymous access to be set. Step 5: Going back to Web Application Management, click on the Anonymous Policy icon. Step 6: Also, before we proceed any further, under the Anonymous Access Restrictions (at web app management) select your Zone, set the Permissions to None – No policy and click Save. Step 7: Now let’s navigate to your top-level site collection for the web application and click Site Actions > Site Settings. Step 8: Under Users and Permissions, click on Site Permissions. Step 9: Under the Edit tab, click on Anonymous Access. Step 10: Choose whether you want anonymous users to have access to the entire Web site or to lists and libraries only, and then click OK. You should now be able to see the view as below under your permissions. Also keep in mind: If you are trying to access the site from a browser within the domain, then you’ll need to change some browser settings to see the effect. Normally this is because the browser (Internet Explorer) is set to log in automatically in the intranet zone only; this may differ if you have explicitly changed the zones and added the site to trusted sites. If this is from a box within your domain, please try to access the site after temporarily changing the Internet Explorer setting to Anonymous Logon for the zone the site is added to (for example, "Intranet"). You will find the same settings by clicking on Tools > Internet Options > Security Tab.
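    If you prefer to script the site-level part (steps 7-10) rather than clicking through the UI, the following is a minimal sketch using the SharePoint 2010 server object model. It is not from the original post; it assumes the web application zone already has anonymous authentication enabled (steps 1-6), and http://ybbest is a placeholder for your site URL.

      using Microsoft.SharePoint;

      class EnableAnonymousOnSite
      {
          static void Main()
          {
              // Run on a SharePoint server with a reference to Microsoft.SharePoint.dll.
              using (SPSite site = new SPSite("http://ybbest"))
              using (SPWeb web = site.RootWeb)
              {
                  // WebAnonymousState.On exposes the entire Web site to anonymous users;
                  // use WebAnonymousState.Enabled for lists and libraries only (step 10).
                  web.AnonymousState = SPWeb.WebAnonymousState.On;
                  web.Update();
              }
          }
      }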

    Read the article

  • URL Routing in ASP.NET 4.0

    In the .NET Framework 3.5 SP1, Microsoft introduced ASP.NET Routing, which decouples the URL of a resource from the physical file on the web server. With ASP.NET Routing you, the developer, define routing rules that map route patterns to a class that generates the content. For example, you might indicate that the URL Categories/CategoryName maps to a class that takes the CategoryName and generates HTML that lists that category's products in a grid. With such a mapping, users could view products for the Beverages category by visiting www.yoursite.com/Categories/Beverages. In .NET 3.5 SP1, ASP.NET Routing was primarily designed for ASP.NET MVC applications, although as discussed in Using ASP.NET Routing Without ASP.NET MVC it is possible to implement ASP.NET Routing in a Web Forms application, as well. However, implementing ASP.NET Routing in a Web Forms application involves a bit of seemingly excessive legwork. In a Web Forms scenario we typically want to map a routing pattern to an actual ASP.NET page. To do so we need to create a route handler class that is invoked when the routing URL is requested and, in a sense, dispatches the request to the appropriate ASP.NET page. For instance, mapping a route to a physical file, such as Categories/CategoryName to ShowProductsByCategory.aspx, requires three steps: (1) Define the mapping in Global.asax, which maps a route pattern to a route handler class; (2) Create the route handler class, which is responsible for parsing the URL, storing any route parameters into some location that is accessible to the target page (such as HttpContext.Items), and returning an instance of the target page or HTTP Handler that handles the requested route; and (3) Write code in the target page to grab the route parameters and use them in rendering its content. Given how much effort it took to just read the preceding sentence (let alone write it) you can imagine that implementing ASP.NET Routing in a Web Forms application is not necessarily the most straightforward task. The good news is that ASP.NET 4.0 has greatly simplified ASP.NET Routing for Web Forms applications by adding a number of classes and helper methods that can be used to encapsulate the aforementioned complexity. With ASP.NET 4.0 it's easier to define the routing rules and there's no need to create a custom route handling class. This article details these enhancements. Read on to learn more! Read More >
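    As a rough illustration of the simplification (this code is not from the article), the article's Categories/CategoryName example can be wired up in an ASP.NET 4.0 Web Forms application with a single MapPageRoute call, and the target page reads the parameter from Page.RouteData rather than HttpContext.Items. The route name and page path below are assumed for the example.

      using System;
      using System.Web.Routing;

      public class Global : System.Web.HttpApplication
      {
          void Application_Start(object sender, EventArgs e)
          {
              // One call replaces the custom route handler class required in .NET 3.5 SP1.
              RouteTable.Routes.MapPageRoute(
                  "ProductsByCategory",              // route name (assumed)
                  "Categories/{CategoryName}",       // route pattern from the article
                  "~/ShowProductsByCategory.aspx");  // target Web Form
          }
      }

      // In ShowProductsByCategory.aspx.cs the parameter is read with:
      // string category = (string)Page.RouteData.Values["CategoryName"];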

    Read the article

  • Mathematica Programming Language – An Introduction

    - by JoshReuben
    The Mathematica http://www.wolfram.com/mathematica/ programming model consists of a kernel computation engine (or grid of such engines) and a front-end of notebook instances that communicate with the kernel throughout a session. The programming model of Mathematica is incredibly rich & powerful – besides numeric calculations, it supports symbols (eg Pi, I, E) and control flow logic. Obviously I could use this as a simple calculator: 5 * 10 --> 50 but this language is much more than that! For example, I could use control flow logic & set up a simple infinite loop: x=1; While[x>0, x=x+1] Different brackets have different purposes: square brackets for function arguments: Cos[x] round brackets for grouping: (1+2)*3 curly brackets for lists: {1,2,3,4} The power of Mathematica (as opposed to, say, Matlab) is that it gives exact symbolic answers instead of a rounded numeric approximation (unless you request it). Mathematica lets you define scoped variables (symbols): a=1; b=2; c=a+b --> 3 These variables can contain symbolic values – you can think of these as partially computed functions. Use Clear[x] to clear a variable's definition, or Remove[x] to remove the symbol entirely. To compute a numerical approximation to n significant digits (default n=6), use N[x,n] or the //N postfix: Pi //N --> 3.14159 N[Pi,50] --> 3.1415926535897932384626433832795028841971693993751 The kernel uses % to reference the last calculation result, %% the 2nd last, %%% the 3rd last, etc., which makes for clearer statements: eg instead of Sqrt[Pi+Sqrt[Pi+Sqrt[Pi]]] do: Sqrt[Pi]; Sqrt[Pi+%]; Sqrt[Pi+%] The help system supports wildcards, so I can search for functions like so: ?Inv* Mathematica supports some very powerful programming constructs and a rich function library that allow you to do things that you would have to write a lot of code for in a language like C++. The Factor function – factorization: Factor[x^3 - 6*x^2 + 11x - 6] --> (-3+x) (-2+x) (-1+x) The Solve function – find the roots of an equation: Solve[x^3 - 2x + 1 == 0] --> {{x -> 1}, {x -> (-1-Sqrt[5])/2}, {x -> (-1+Sqrt[5])/2}} The Expand function – express (1+x)^10 in polynomial form: Expand[(1+x)^10] --> 1+10x+45x^2+120x^3+210x^4+252x^5+210x^6+120x^7+45x^8+10x^9+x^10 The Prime function – what is the 1000th prime? Prime[1000] --> 7919 Mathematica also has some powerful graphics capabilities: the Plot function – plot the graph of y=Sin x over a single period: Plot[Sin[x], {x,0,2*Pi}] You can also plot 3D surfaces of functions using the Plot3D function.

    Read the article
