Search Results

Search found 8703 results on 349 pages for 'cache money'.


  • How to handle monetary values in PHP and MySQL?

    - by Songo
    I've inherited a huge pile of legacy code written in PHP on top of a MySQL database. The thing I noticed is that the application uses doubles for storage and manipulation of data. I came across numerous posts mentioning how doubles are not suited for monetary operations because of rounding errors, but I have yet to come across a complete solution for how monetary values should be handled in PHP code and stored in a MySQL database. Is there a best practice when it comes to handling money specifically in PHP? Things I'm looking for are: How should the data be stored in the database? Column type? Size? How should the data be handled in normal addition, subtraction, multiplication or division? When should I round the values? How much rounding is acceptable, if any? Is there a difference between handling large monetary values and low ones? Note: a VERY simplified sample of how I might encounter money values in everyday life:

        $a = $_POST['price_in_dollars'];            // (ex: 25.06) will be read as a string; should it be cast to double?
        $b = $_POST['discount_rate'];               // (ex: 0.35) value will always be less than 1
        $valueToBeStored = $a * $b;                 // any hint here is welcomed
        $valueFromDatabase = $row['price'];         // the price column in the database could be double, decimal, ...etc.
        $priceToPrint = $valueFromDatabase * 0.25;  // again, cast needed or not?

    I hope you use this sample code as a means to bring out more use cases and not to take it literally, of course. Bonus question: if I'm to use an ORM such as Doctrine or Propel, how different will it be to use money in my code?
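    A common answer, offered here only as a hedged sketch rather than a definitive rule: store amounts in a DECIMAL column (DECIMAL(13,4) is a frequently suggested size, an assumption rather than a standard) and do the arithmetic with PHP's BCMath functions on strings, so binary floating point never enters the picture:

        <?php
        // Hedged sketch -- requires the bcmath extension; the column is assumed
        // to be defined as something like: price DECIMAL(13,4) NOT NULL
        $a = $_POST['price_in_dollars'];   // e.g. "25.06" -- keep it as a string
        $b = $_POST['discount_rate'];      // e.g. "0.35"

        // Multiply at a generous scale; round once, at the output boundary.
        $raw = bcmul($a, $b, 6);           // "8.771000", exact at scale 6

        // For display only: cents precision. The (float) cast is safe here
        // because the value is already a short, exact decimal string.
        $display = number_format((float) $raw, 2, '.', '');

        // When storing, bind the string itself; MySQL parses it into DECIMAL
        // without a float round-trip.
        // $stmt = $pdo->prepare('INSERT INTO prices (price) VALUES (?)');
        // $stmt->execute([$raw]);

    With Doctrine or Propel the same idea applies: map the column as decimal (handed to PHP as a string) rather than float, and keep the math in BCMath or integer cents.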

    Read the article

  • Can I make a business of teaching home users Ubuntu? [on hold]

    - by Dorgaldir
    I was thinking about a way to bring Ubuntu to the bigger public, since it has great advantages for people in the lower income class who only use a PC for basic tasks. They pay for a Windows licence without actually needing Windows, because 95% of what they do is in a browser and the other 5% is typing a Word document or making a simple Excel sheet. So for these people something like Ubuntu is ideal: they can prolong the life of their old PC or laptop with Ubuntu and thus save extra money. And as we all know, saving money is not only interesting for the lowest incomes but for most of us. But when I talk to people, they don't want to use Ubuntu because they know Windows and they don't know this; they'll complain about having to adapt to Windows 8, but adapting to Ubuntu seems a bridge too far. But what if someone in the neighborhood gave simple Ubuntu courses, teaching people about Ubuntu, stuff like: What is an OS? What is Ubuntu? How do I obtain Ubuntu? How do I install Ubuntu? How do I set up my email in Ubuntu? How do I make a text document in Ubuntu? How do I update my Facebook wall in Ubuntu? ... Simple basic PC usage, but within Ubuntu. But as much as I would like to work for free all day, I can't do this for free for people outside of my social circle. So I was wondering: is it possible to make a business and make money giving Ubuntu courses, or are there steps to be taken before this is possible? For example: Do I need an Ubuntu or Canonical licence? Do I need to get a certificate? Do I have to make some kind of deal or contract with Canonical? Just to be clear, this is all just an idea in my head at this point; I'm just gathering information. I'm not a teacher at a school, just a programmer who is thinking about options in life. Thanks in advance!

    Read the article

  • Who can tell me the rules of the extra 100% bonus for swtor credits at swtor2credits?

    - by user46860
    During Father's Day you can buy swtor credits at 50% off! swtor2credits.com offers swtor players an extra 100% bonus for swtor credits: when you spend the same money as before, you get double swtor credits! Rules for the extra 100% bonus for swtor credits: 1. June 16 to June 18, 2014, 02:00-03:00 a.m. GMT is the ONLY valid time for getting double swtor credits at swtor2credits. 2. The total sum of your order is valued at $10 at most. Beyond this amount, please apply a discount code you know to save extra money. 3. Everyone has only one chance to get double swtor credits at swtor2credits during our promotion. 4. If your order has used an extra discount code or voucher, you lose the chance to get the exclusive 100% bonus. Please read these activity rules carefully, and don't miss the time! Like Swtor2credits on Facebook to gain a free cash coupon, up to $16 in giveaways for swtor credits! 1. Share our Facebook posts in your timeline. 2. Leave your precious constructive suggestions on our Facebook page. 3. Share your amazing swtor gaming screenshots on our page. Time: May 29, 2014 to June 12, 2014, GMT. https://www.facebook.com/pages/swtor2creditscom/493389160685307 Cheap swtor credits at swtor2credits.com.

    Read the article

  • ReportViewer stored procedure [closed]

    - by Liesl
    I want to write a stored procedure for my invoice ReportViewer. After an invoice is generated in the ReportViewer, it must also add the data to my Invoices table. These are all the tables in my database:

        CREATE TABLE [dbo].[Waybills](
            [WaybillID] [int] IDENTITY(1,1) NOT NULL,
            [SenderName] [varchar](50) NULL,
            [SenderAddress] [varchar](50) NULL,
            [SenderContact] [int] NULL,
            [ReceiverName] [varchar](50) NULL,
            [ReceiverAddress] [varchar](50) NULL,
            [ReceiverContact] [int] NULL,
            [UnitDescription] [varchar](50) NULL,
            [UnitWeight] [int] NULL,
            [DateReceived] [date] NULL,
            [Payee] [varchar](50) NULL,
            [CustomerID] [int] NULL,
            PRIMARY KEY CLUSTERED

        CREATE TABLE [dbo].[Customer](
            [CustomerID] [int] IDENTITY(1,1) NOT NULL,
            [customerName] [varchar](30) NULL,
            [CustomerAddress] [varchar](30) NULL,
            [CustomerContact] [varchar](30) NULL,
            [VatNo] [int] NULL,
            CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED )

        CREATE TABLE [dbo].[Cycle](
            [CycleID] [int] IDENTITY(1,1) NOT NULL,
            [CycleNumber] [int] NULL,
            [StartDate] [date] NULL,
            [EndDate] [date] NULL ) ON [PRIMARY]

        CREATE TABLE [dbo].[Payments](
            [PaymentID] [int] IDENTITY(1,1) NOT NULL,
            [Amount] [money] NULL,
            [PaymentDate] [date] NULL,
            [CustomerID] [int] NULL,
            PRIMARY KEY CLUSTERED

        CREATE TABLE Invoices (
            InvoiceID int IDENTITY(1,1),
            InvoiceNumber int,
            InvoiceDate date,
            BalanceBroughtForward money,
            OutstandingAmount money,
            CustomerID int,
            WaybillID int,
            PaymentID int,
            CycleID int,
            PRIMARY KEY (InvoiceID),
            FOREIGN KEY (CustomerID) REFERENCES Customer(CustomerID),
            FOREIGN KEY (WaybillID) REFERENCES Waybills(WaybillID),
            FOREIGN KEY (PaymentID) REFERENCES Payments(PaymentID),
            FOREIGN KEY (CycleID) REFERENCES Cycle(CycleID) )

    I want my stored procedure to find all waybills for a specific customer in a specific cycle, with the payments made by this client. All this data must then be added to the INVOICES table. Can someone please help me or point me in the right direction? This is what I have so far:

        CREATE PROC GenerateInvoice
            @StartDate date,
            @EndDate date,
            @Payee varchar(30)
        AS
        SELECT Waybills.WaybillNumber,
            Waybills.SenderName, Waybills.SenderAddress, Waybills.SenderContact,
            Waybills.ReceiverName, Waybills.ReceiverAddress, Waybills.ReceiverContact,
            Waybills.UnitDescription, Waybills.UnitWeight, Waybills.DateReceived,
            Waybills.Payee, Payments.Amount, Payments.PaymentDate,
            Cycle.CycleNumber, Cycle.StartDate, Cycle.EndDate
        FROM Waybills
        CROSS JOIN Payments
        CROSS JOIN Cycle
        WHERE Waybills.ReceiverName = @Payee
            AND (Waybills.DateReceived BETWEEN (@StartDate) AND (@EndDate))

        INSERT INTO Invoices (InvoiceNumber, InvoiceDate, BalanceBroughtForward, OutstandingAmount)
        VALUES (@InvoiceNumber, @InvoiceDate, @BalanceBroughtForward, @OutstandingAmount)
        GO
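    One likely direction, sketched here with stated assumptions rather than as a definitive answer: the CROSS JOINs pair every waybill with every payment and every cycle, so the tables should be joined on their keys instead, and the INSERT can be fed directly from the SELECT (the @InvoiceNumber and @InvoiceDate variables above are never declared). Note that the Waybills table has no WaybillNumber column, so WaybillID is used below, and the invoice-numbering and balance rules are guesses:

        -- Hedged sketch: invoice one customer for one cycle. Column names come
        -- from the table definitions above; the numbering scheme is assumed.
        CREATE PROC GenerateInvoice
            @CustomerID int,
            @CycleID    int
        AS
        BEGIN
            INSERT INTO Invoices
                (InvoiceNumber, InvoiceDate, BalanceBroughtForward,
                 OutstandingAmount, CustomerID, WaybillID, PaymentID, CycleID)
            SELECT
                w.WaybillID,          -- assumption: reuse the waybill id as the number
                GETDATE(),            -- invoice date = generation date
                0,                    -- balance brought forward: business rule unknown
                ISNULL(p.Amount, 0),
                w.CustomerID, w.WaybillID, p.PaymentID, c.CycleID
            FROM Waybills w
            JOIN Cycle c
                ON w.DateReceived BETWEEN c.StartDate AND c.EndDate
            LEFT JOIN Payments p
                ON p.CustomerID = w.CustomerID
            WHERE w.CustomerID = @CustomerID
              AND c.CycleID = @CycleID
        END
        GO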

    Read the article

  • Tuning Linux IP routing parameters -- secret_interval and tcp_mem

    - by Jeff Atwood
    We had a little failover problem with one of our HAProxy VMs today. When we dug into it, we found this:

        Jan 26 07:41:45 haproxy2 kernel: [226818.070059] __ratelimit: 10 callbacks suppressed
        Jan 26 07:41:45 haproxy2 kernel: [226818.070064] Out of socket memory
        Jan 26 07:41:47 haproxy2 kernel: [226819.560048] Out of socket memory
        Jan 26 07:41:49 haproxy2 kernel: [226822.030044] Out of socket memory

    Which, per this link, apparently has to do with low default settings for net.ipv4.tcp_mem. So we increased them by 4x from their defaults (this is Ubuntu Server, not sure if the Linux flavor matters):

        current values are:  45984  61312  91968
        new values are:     183936 245248 367872

    After that, we started seeing a bizarre error message:

        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579726] Route hash chain too long!
        Jan 26 08:18:49 haproxy1 kernel: [ 2291.579732] Adjust your secret_interval!

    Shh.. it's a secret!! This apparently has to do with /proc/sys/net/ipv4/route/secret_interval, which defaults to 600 and controls periodic flushing of the route cache. The secret_interval instructs the kernel how often to blow away ALL route hash entries regardless of how new/old they are. In our environment this is generally bad: the CPU will be busy rebuilding thousands of entries per second every time the cache is cleared. However, we set this to run once a day to keep memory leaks at bay (though we've never had one). While we are happy to reduce this, it seems odd to recommend dropping the entire route cache at regular intervals, rather than simply pushing old values out of the route cache faster. After some investigation, we found /proc/sys/net/ipv4/route/gc_elasticity, which seems to be a better option for keeping the route table size in check: gc_elasticity can best be described as the average bucket depth the kernel will accept before it starts expiring route hash entries. This will help maintain the upper limit of active routes. We adjusted elasticity from 8 to 4, in the hope that the route cache would prune itself more aggressively. The secret_interval does not feel correct to us. But there are a bunch of settings and it's unclear which are really the right way to go here:

        /proc/sys/net/ipv4/route/gc_elasticity (8)
        /proc/sys/net/ipv4/route/gc_interval (60)
        /proc/sys/net/ipv4/route/gc_min_interval (0)
        /proc/sys/net/ipv4/route/gc_timeout (300)
        /proc/sys/net/ipv4/route/secret_interval (600)
        /proc/sys/net/ipv4/route/gc_thresh (?)
        rhash_entries (kernel parameter, default unknown?)

    We don't want to make the Linux routing worse, so we're kind of afraid to mess with some of these settings. Can anyone advise which routing parameters are best to tune for a high-traffic HAProxy instance?
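    For concreteness, the knobs above are plain sysctls, so changes like the ones described can be tried, and rolled back, without a reboot. A hedged sketch; the values shown are simply the ones from the question, not recommendations:

        # Apply at runtime (on kernels that still have the route-cache sysctls;
        # the route cache itself was removed entirely in Linux 3.6):
        sysctl -w net.ipv4.tcp_mem="183936 245248 367872"
        sysctl -w net.ipv4.route.gc_elasticity=4
        sysctl -w net.ipv4.route.secret_interval=86400   # "once a day", as described

        # Persist across reboots by adding the same keys to /etc/sysctl.conf,
        # then reload:
        sysctl -p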

    Read the article

  • Unusual Apache->Tomcat caching issue

    - by iftrue
    Right now, I have an Apache setup sitting in front of Tomcat to handle caching. This setup has been given to an external service to manage, and since the transition, I've noticed odd behavior. Specifically, when I request a swf file from the web server, I hit the Apache cache (good), but occasionally I'll receive a truncated file. Once I receive this truncated file, the cache will NOT refresh until I manually delete the cache and let the swf pull down from tomcat again. The external service claims that the configuration is fine, but I don't see any way this could be happening aside from improper configuration. Now, there are two apache and two tomcat servers under a load balancer, and occasionally one apache cache will break while another does not (leading to 50% of all requests getting bad, truncated data). Where should I start looking to debug this issue? What could POSSIBLY be causing this odd behavior? Edit: Inspecting the logs, tomcat throws this: java.io.IOException: Bad file number at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(FileInputStream.java:199) at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) at java.io.BufferedInputStream.read(BufferedInputStream.java:317) at java.io.FilterInputStream.read(FilterInputStream.java:90) at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:1968) at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:1714) at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:809) at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:325) at javax.servlet.http.HttpServlet.service(HttpServlet.java:690) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:209) at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347) at org.terracotta.modules.tomcat.tomcat_5_5.SessionValve55.invoke(SessionValve55.java:57) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190) at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283) at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767) at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697) at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690) at java.lang.Thread.run(Thread.java:619) followed by access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:00:27:32 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:27:33 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - 
access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:39:53 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:02:27:38 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - So apache is caching the bad file size. What could possibly be causing this, and possibly separate, how do I ensure that this exception does not get written to cache?
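    One angle worth checking, offered as a hedged sketch rather than a known fix for this particular setup: if the front ends use Apache 2.2 mod_disk_cache, size limits plus a periodic htcacheclean run at least bound how long a truncated entry can live. The paths and sizes below are illustrative assumptions, not the real configuration:

        # Hypothetical excerpt -- not the poster's actual configuration
        <IfModule mod_disk_cache.c>
            CacheRoot        /var/cache/apache2/mod_disk_cache
            CacheEnable disk /myApp
            CacheDirLevels   2
            CacheDirLength   1
            # Caps on what gets cached; 2.2 does not verify the stored body
            # against Content-Length, which is why a truncated response can
            # be cached and then served until it is evicted.
            CacheMinFileSize 1
            CacheMaxFileSize 10000000
        </IfModule>

        # Prune the disk cache on a schedule so a bad entry does not live
        # forever (htcacheclean ships with Apache; -d runs it as a daemon):
        htcacheclean -d30 -p/var/cache/apache2/mod_disk_cache -l1024M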

    Read the article

  • Linux: find out what process is using all the RAM?

    - by Timur
    Before actually asking, just to be clear: yes, I know about disk cache, and no, it is not my case :) Sorry for this preamble :) I'm using CentOS 5. Every application in the system is swapping heavily, and the system is very slow. When I do free -m, here is what I get:

                         total       used       free     shared    buffers     cached
        Mem:              3952       3929         22          0          1         18
        -/+ buffers/cache:           3909         42
        Swap:            16383         46      16337

    So, I actually have only 42 MB to use! As far as I understand, -/+ buffers/cache actually doesn't count the disk cache, so I indeed only have 42 MB, right? I thought I might be wrong, so I tried to switch off the disk caching, and it had no effect: the picture remained the same. So, I decided to find out who is using all my RAM, and I used top for that. But, apparently, it reports that no process is using my RAM. The only process in my top is MySQL, but it is using 0.1% of RAM and 400 MB of swap. Same picture when I try to run other services or applications: all go into swap, and top shows that MEM is not used (0.1% maximum for any process).

        top - 15:09:00 up 2:09, 2 users, load average: 0.02, 0.16, 0.11
        Tasks: 112 total, 1 running, 111 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4046868k total, 4001368k used, 45500k free, 748k buffers
        Swap: 16777208k total, 68840k used, 16708368k free, 16632k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  SWAP COMMAND
         3214 ntp    15   0 23412 5044 3916 S  0.0  0.1 0:00.00   17m ntpd
         2319 root    5 -10 12648 4460 3184 S  0.0  0.1 0:00.00  8188 iscsid
         2168 root   RT   0 22120 3692 2848 S  0.0  0.1 0:00.00   17m multipathd
         5113 mysql  18   0  474m 2356  856 S  0.0  0.1 0:00.11  472m mysqld
         4106 root   34  19  251m 1944 1360 S  0.0  0.0 0:00.11  249m yum-updatesd
         4109 root   15   0 90152 1904 1772 S  0.0  0.0 0:00.18   86m sshd
         5175 root   15   0 90156 1896 1772 S  0.0  0.0 0:00.02   86m sshd

    Restarting doesn't help and, by the way, is very slow, which I wouldn't normally expect on this machine (4 cores, 4 GB RAM, RAID1). So, with that, I'm pretty sure this is not the disk cache using the RAM, because normally the cache would be reduced to let other processes use RAM rather than letting them go to swap. So, finally, the question: does anyone have any ideas how to find out what process is actually using the memory so heavily?
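    A couple of hedged starting points (sketches, since nothing is confirmed for this particular box): rank processes by resident memory yourself, and then look at allocations top never attributes to any process, such as kernel slab memory:

        # Largest resident sets first -- the same data top shows, sorted:
        ps aux --sort=-rss | head -n 15

        # If no process accounts for the usage, check kernel-side memory.
        # A huge Slab line in /proc/meminfo would explain the "missing" RAM:
        grep -i slab /proc/meminfo

        # Break slab usage down by cache (needs root):
        slabtop -o | head -n 25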

    Read the article

  • Using a map with set_intersection

    - by Robin Welch
    Not used set_intersection before, but I believe it will work with maps. I wrote the following example code, but it doesn't give me what I'd expect:

        #include <map>
        #include <string>
        #include <iostream>
        #include <algorithm>

        using namespace std;

        struct Money
        {
            double amount;
            string currency;

            bool operator< ( const Money& rhs ) const
            {
                if ( amount != rhs.amount )
                    return ( amount < rhs.amount );
                return ( currency < rhs.currency );
            }
        };

        int main( int argc, char* argv[] )
        {
            Money mn[] = { { 2.32, "USD" }, { 2.76, "USD" }, { 4.30, "GBP" },
                           { 1.21, "GBP" }, { 1.37, "GBP" }, { 6.74, "GBP" },
                           { 2.55, "EUR" } };

            typedef pair< int, Money > MoneyPair;
            typedef map< int, Money > MoneyMap;

            MoneyMap map1;
            map1.insert( MoneyPair( 1, mn[1] ) );
            map1.insert( MoneyPair( 2, mn[2] ) );
            map1.insert( MoneyPair( 3, mn[3] ) );  // (3)
            map1.insert( MoneyPair( 4, mn[4] ) );  // (4)

            MoneyMap map2;
            map1.insert( MoneyPair( 3, mn[3] ) );  // (3)
            map1.insert( MoneyPair( 4, mn[4] ) );  // (4)
            map1.insert( MoneyPair( 5, mn[5] ) );
            map1.insert( MoneyPair( 6, mn[6] ) );
            map1.insert( MoneyPair( 7, mn[7] ) );

            MoneyMap out;
            MoneyMap::iterator out_itr( out.begin() );

            set_intersection( map1.begin(), map1.end(),
                              map2.begin(), map2.end(),
                              inserter( out, out_itr ) );

            cout << "intersection has " << out.size() << " elements." << endl;
            return 0;
        }

    Since the pairs labelled (3) and (4) appear in both maps, I was expecting to get 2 elements in the intersection, but no, I get:

        intersection has 0 elements.

    I'm sure this is something to do with the comparator on the map / pair but I can't figure it out.
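    For what it's worth, a hedged reading of the snippet: the comparator is probably fine. The second block of inserts says map1 where map2 was presumably intended, so map2 stays empty and the intersection with an empty range is empty; separately, mn[7] reads past the end of the 7-element array (valid indices are 0-6). A sketch of the presumed intent, replacing only the second block of inserts in the program above:

        // Sketch of the presumed intent -- only the inserts change:
        MoneyMap map2;
        map2.insert( MoneyPair( 3, mn[3] ) );  // (3) shared with map1
        map2.insert( MoneyPair( 4, mn[4] ) );  // (4) shared with map1
        map2.insert( MoneyPair( 5, mn[5] ) );
        map2.insert( MoneyPair( 6, mn[6] ) );  // mn[6] is the last valid element

        // set_intersection compares pair<const int, Money> values with
        // operator<, which works because Money defines operator< -- so the
        // pairs keyed 3 and 4 now land in 'out', and the program prints:
        //   intersection has 2 elements.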

    Read the article

  • phablet-flash aborting while installing Ubuntu Touch on Nexus 4

    - by Till B
    I have a Nexus 4 with Android 4.3 installed and I want to flash it to Ubuntu Touch. My system is Ubuntu 12.04, running inside a virtual machine on Mac OS 10.5.8. To use the VM, I opened an NAT bridge and forwarded port 5037 for adb, I can see the Nexus with adb and e.g. use the adb shell into it. USB ports are also forwarded to the VM. I follow these instructions to the letter. My bootloader is unlocked, just as it was described in the instructions. Now I encounter different issues, when executing sudo phablet-flash ubuntu-system --no-backup. On the first run, it got stuck in this state: INFO:phablet-flash:Decompressing partitions/recovery.img from /home/till/Downloads/phablet-flash/imageupdates/pool/device-5ba3031cb0d6fc624848266edba781e3e821b6e1e8dd21105725f0ab26077d0a.tar.xz INFO:phablet-flash:Restarting device... wait INFO:phablet-flash:Restarting device... wait complete INFO:phablet-flash:Booting /tmp/tmpMSN8bm/partitions/recovery.img < waiting for device > downloading 'boot.img'... OKAY [ 1.772s] booting... OKAY [ 0.005s] finished. total time: 1.779s INFO:phablet-flash:Waiting for recovery image to boot The following happened: around the line "INFO:phablet-flash: Restarting...", it rebooted into the bootloader. The bootloader shows only for two seconds, then the screen goes off and the phone stays off. But I do notice, that the screen is not off - it is just black, but the background light is on. If I wait long enough, phablet-flash aborts with ERROR:phablet-flash:Wait for recovery expired On the second try, I wanted to manually start the bootloader and choose "Recovery mode". Pressing "volume down+power" at first did nothing. Releasing the buttons and then pressing them again brought me into the bootloder. After choosing "Recovery mode", phablet-flash continued and after a while aborted with the following output: INFO:phablet-flash:Wait for recovery image to boot complete INFO:phablet-flash:Clearing /data and /cache INFO:phablet-flash:Pushing /home/till/Downloads/phablet-flash/imageupdates/pool/ubuntu-2b5345658b58e55207c4a4e7b6b3d8cd4f3d9a3187d2448fc9020c884234bac0.tar.xz to /cache/recovery/ failed to copy '/home/till/Downloads/phablet-flash/imageupdates/pool/ubuntu-2b5345658b58e55207c4a4e7b6b3d8cd4f3d9a3187d2448fc9020c884234bac0.tar.xz' to '/cache/recovery/': Permission denied ERROR:phablet-flash:Command 'adb push /home/till/Downloads/phablet-flash/imageupdates/pool/ubuntu-2b5345658b58e55207c4a4e7b6b3d8cd4f3d9a3187d2448fc9020c884234bac0.tar.xz /cache/recovery/' returned non-zero exit status 1 Removing directory /tmp/tmpDnbz6N Removing directory /tmp/tmpth4L6w What can I do to properly flash my phone with Ubuntu Touch? I noticed that adb does not show the phone in recovery mode: Typing adb devices, when the Nexus 4 is in recovery mode, shows the serial number and the state device, where it should show recovery. Should the phone be rooted? This is not mentioned in the instructions.
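    A few hedged checks (suggestions only, since the root cause here isn't certain): the "Permission denied" on /cache/recovery and the device reporting 'device' rather than 'recovery' can both be inspected directly with adb:

        # Restart adb with proper USB permissions, then watch the state:
        adb kill-server && sudo adb start-server
        adb devices                 # the serial should report 'recovery' while in recovery mode

        # If adbd on the device allows it, restart it as root and retest the push:
        adb root
        adb shell ls -ld /cache/recovery
        adb push smallfile.txt /cache/recovery/   # hypothetical test file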

    Read the article

  • Integrating Flickr with ASP.Net application

    - by sreejukg
    Flickr is the popular photo management and sharing application offered by Yahoo. The services from Flickr allow you to store and share photos and videos online. Flickr offers strong API support for almost all the services it provides. Using this API, developers can integrate photos into their public websites. Since 2005, developers have collaborated on top of Flickr's APIs to build fun, creative, and gorgeous experiences around photos that extend beyond Flickr. In this article I am going to demonstrate how easily you can bring the photos stored on Flickr to your website. Let me explain the scenario this article is trying to address. I have a Flickr account where I upload photos and share them in the many ways Flickr offers. Now I have a public website; instead of re-uploading the photos to the public website, I want to show them from Flickr. Also, I need complete control over which photos to display. So I went and referred to the Flickr documentation, and there is API support ready to address my scenario (and more…). FlickrNet API for ASP.Net To integrate Flickr with ASP.Net applications, there is a library available on CodePlex. You can find it here: http://flickrnet.codeplex.com/ Visit the URL and download the latest version. The download includes a Zip file; when you unzip it you will get a number of dlls. Since I am going to use an ASP.Net application, I need FlickrNet.dll. There is also a help file (.chm) available in the download for your reference. Once you have the dll, you need to use the Flickr API from your website. I assume you have a Flickr account and you are familiar with Flickr services. Arrange your photos using Sets in Flickr In Flickr, you can define sets and add your uploaded photos to them. You can compare a set to a photo album. A set is a logical collection of photos, which is an excellent option for categorizing your photos. Typically you will have a number of sets, each set having a few photos. You can write an application that brings photos from sets to your website. For the purpose of this article I have already created a set in Flickr and added some photos to it. Once you are logged in to Flickr, you can see the Sets under the menu. On the Sets page, you will see all the sets you have created. As you will notice, there are certain sample images I have uploaded just to test the functionality (I wish I could take better photos, so please bear with me). I have created two photo sets named Blue Album and Red Album. Clicking on the image for a set will take you to the corresponding set page. In the set "Red Album" there are 4 photos, and the set has a unique ID (highlighted in the URL). You can simply retrieve the photos with the set id from your application. In this article I am going to retrieve the images from the Red Album on my ASP.Net page. For that, first I need to set up the Flickr API for my usage. Configure Flickr API Key As I mentioned, we are going to use the Flickr API to retrieve the photos stored in Flickr. In order to get access to the Flickr API, you need an API key. To create an API key, navigate to the URL http://www.flickr.com/services/apps/create/ Click on the Request an API key link; now you need to tell Flickr whether your application is commercial or non-commercial. I have selected a non-commercial key. Now you need to enter certain information about your application. Once you enter the details, click on the Submit button. Flickr will now create the API key for your application.
    Generating a non-commercial API key is very easy; in a couple of steps the key will be generated, and you can use the key in your application immediately. ASP.Net application for retrieving photos Now we need to write an ASP.Net application that displays pictures from Flickr. Create an empty web application (I named this FlickerIntegration) and add a reference to FlickrNet.dll. Add a web form page to the application where you will retrieve and display photos (I have named this Gallery.aspx). After doing all this, the solution explorer will contain the FlickrNet reference and the Gallery.aspx page. I have used the code in the Gallery.aspx page (a full listing is sketched at the end of this article), and I am going to explain the code line by line here. First it adds a reference to the FlickrNet namespace: using FlickrNet; Then create a Flickr object by using your API key: Flickr f = new Flickr("<yourAPIKey>"); Now, when you retrieve photos, you can decide which fields you need to retrieve from Flickr. Every photo in Flickr contains lots of information, and retrieving all of it will affect performance. For demonstration purposes, I have retrieved all the available fields as follows: PhotoSearchExtras.All But if you want to specify the fields, you can use the logical OR operator (|). For example, the following statement will retrieve the owner name and date taken: PhotoSearchExtras extraInfo = PhotoSearchExtras.OwnerName | PhotoSearchExtras.DateTaken; Then retrieve all the photos from a photo set using the PhotosetsGetPhotos method. I have passed the PhotoSearchExtras object created earlier: PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo); The PhotosetsGetPhotos method will return a collection of Photo objects. You can navigate through the collection using a foreach statement: foreach (Photo p in photos) { //access each photo's properties } The Photo class has lots of properties that map to the properties from Flickr. The .chm documentation that comes along with the CodePlex download is a great asset for understanding the fields. In the above code I just used the following: p.LargeUrl - retrieves the large image url for the photo. p.ThumbnailUrl - retrieves the thumbnail url for the photo. p.Title - retrieves the title of the photo. p.DateUploaded - retrieves the date of upload. Visual Studio IntelliSense will give you all the properties, so it is easy to find the right properties you are looking for, and most of them are self-explanatory. In the above code, I just pushed the photos to the page. In real life you can use the retrieved photos along with jQuery libraries to create animated photo galleries, slideshows, etc. Configuration and Troubleshooting If you get an access denied error while executing the code, you need to disable caching in the Flickr API. FlickrNet caches the photos to your local disk when they are retrieved. You can specify a cache folder, on which the application needs write permission. You can specify the cache folder in code as follows: Flickr.CacheLocation = Server.MapPath("./FlickerCache/"); If the application doesn't have write permission to the cache folder, the application will throw an access denied error. If you cannot give write permission to the cache folder, then you must disable caching. You can do this from code as follows: Flickr.CacheDisabled = true; Disabling the cache will have an impact on performance. Take care!
    You can also define the Flickr settings in the web.config file. You can find the documentation here: http://flickrnet.codeplex.com/wikipage?title=ExampleConfigFile&ProjectName=flickrnet Flickr is a great place for storing and sharing photos, and the API access allows developers to do seamless integration with the photos uploaded on Flickr.
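    Since the original Gallery.aspx listing was shown as a screenshot, here is a reconstruction assembled from the snippets quoted above. It is a sketch, not the author's exact code: the Literal control name and the HTML markup are assumptions.

        // Gallery.aspx.cs -- reconstructed sketch
        using System;
        using System.Text;
        using FlickrNet;

        public partial class Gallery : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Flickr f = new Flickr("<yourAPIKey>");

                // Retrieve every available field (cheaper subsets are shown above).
                PhotoSearchExtras extraInfo = PhotoSearchExtras.All;

                // "72157629872940852" is the id of the Red Album set.
                PhotosetPhotoCollection photos = f.PhotosetsGetPhotos("72157629872940852", extraInfo);

                StringBuilder sb = new StringBuilder();
                foreach (Photo p in photos)
                {
                    sb.AppendFormat(
                        "<a href=\"{0}\"><img src=\"{1}\" alt=\"{2}\" /></a> uploaded {3}<br />",
                        p.LargeUrl, p.ThumbnailUrl, p.Title, p.DateUploaded);
                }

                // galleryLiteral is an assumed <asp:Literal> control on Gallery.aspx.
                galleryLiteral.Text = sb.ToString();
            }
        }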

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
    A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as using a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish whether any pressure was being placed on the disk, CPU or memory subsystems when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters as presented by the situation. However, I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb's awesome chapters from "Expert Cube Development" that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry

    Simulating Connections To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst-performing queries that were identified by the customer. A similar script can be downloaded from CodePlex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip.

    Performance Monitor Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you find they are not displayed, having been collected under the default account that has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand :)

    The counters used (Object \ Counter (Instance): justification):

    - System \ Processor Queue Length (N/A): Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle.
    - System \ Context Switches/sec (N/A): Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over the long term the value of this counter should remain fairly constant. If this counter suddenly starts increasing, however, it may be an indication of a malfunctioning device, especially if the Processor \ Interrupts/sec (_Total) counter on your machine shows a similar unexplained increase.
    - Process \ % Processor Time (sqlservr): Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the SQL Server process on the processor.
    - Process \ % Processor Time (msmdsrv): Definitely should be used if Processor \ % Processor Time (_Total) is maxing at 100%, to assess the effect of the Analysis Services process on the processor.
    - Process \ Working Set (sqlservr): If the Memory \ Available MBytes counter is decreasing, this counter can be run to indicate whether the process is consuming larger and larger amounts of RAM. Process(instance) \ Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault.
    - Process \ Working Set (msmdsrv): As above, for the Analysis Services process.
    - Processor \ % Processor Time (_Total and individual cores): Measures the total utilization of your processor by all running processes. If multi-proc, be mindful that only an average is provided.
    - Processor \ % Privileged Time (_Total): To see how the OS is handling basic IO requests. If kernel-mode utilization is high, your machine is likely underpowered, as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications.
    - Processor \ % User Time (_Total): To see how the applications are interacting from a processor perspective; a high percentage utilisation indicates that the server is dealing with too many applications and may require increasing the hardware or scaling out.
    - Processor \ Interrupts/sec (_Total): The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time, but a sudden unexplained increase could indicate a device malfunction, which can be confirmed using the System \ Context Switches/sec counter.
    - Memory \ Pages/sec (N/A): Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays, and the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a Perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. You may also want to check the configuration of the page file on the server.
    - Memory \ Available MBytes (N/A): The amount of physical memory available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM.
    - Physical Disk \ Disk Transfers/sec (each physical disk): If it goes above 10 disk I/Os per second then you've got poor response time for your disk.
    - Physical Disk \ % Idle Time (_Total): If Disk Transfers/sec is above 25 disk I/Os per second, use this counter, which measures the percentage of time that your hard disk is idle during the measurement interval; if you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk, which is unable to service these requests in a timely fashion.
    - Physical Disk \ Disk Queue Length (the OLAP and SQL physical disks): A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk.
    - Network Interface \ Bytes Total/sec (the NIC): Should be monitored over a period of time to see if there is an increase/decrease in network utilisation.
    - Network Interface \ Current Bandwidth (the NIC): An estimate of the current bandwidth of the network interface in bits per second (bps).
    - MSAS 2005: Memory \ Memory Limit High KB (N/A): Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Limit Low KB (N/A): Shows (as a percentage) the low memory limit configured for SSAS in the same msmdsrv.ini.
    - MSAS 2005: Memory \ Memory Usage KB (N/A): Displays the memory usage of the server process.
    - MSAS 2005: Memory \ File Store KB (N/A): Displays the amount of memory that is reserved for the cache. Note that if the total memory limit in msmdsrv.ini is set to 0, no memory is reserved for the cache.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Direct/sec (N/A): Displays the rate of queries answered from the cache directly.
    - MSAS 2005: Storage Engine Query \ Queries from Cache Filtered/sec (N/A): Displays the rate of queries answered by filtering an existing cache entry.
    - MSAS 2005: Storage Engine Query \ Queries from File/sec (N/A): Displays the rate of queries answered from files.
    - MSAS 2005: Storage Engine Query \ Average Time/query (N/A): Displays the average time of a query.
    - MSAS 2005: Connection \ Current connections (N/A): Displays the number of connections against the SSAS instance.
    - MSAS 2005: Connection \ Requests/sec (N/A): Displays the rate of query requests per second.
    - MSAS 2005: Locks \ Current Lock Waits (N/A): Displays the number of connections waiting on a lock.
    - MSAS 2005: Threads \ Query Pool Job Queue Length (N/A): The number of queries in the job queue.
    - MSAS 2005: Proc Aggregations \ Temp File Bytes Written/sec (N/A): Shows the number of bytes of data processed in a temporary file.
    - MSAS 2005: Proc Aggregations \ Temp File Rows Written/sec (N/A): Shows the number of rows of data processed in a temporary file.
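    Since the counter log is just a Perfmon data collector, it can also be created from the command line. A hedged sketch using logman (only a few of the counters above are shown, and the collector name and output path are assumptions; remember the RUN AS caveat when collecting MSAS counters):

        rem Create a counter log sampling every 15 seconds
        logman create counter SSASBaseline ^
          -c "\Processor(_Total)\% Processor Time" ^
             "\System\Processor Queue Length" ^
             "\Memory\Pages/sec" ^
             "\MSAS 2005: Memory\Memory Usage KB" ^
          -si 15 -o C:\PerfLogs\SSASBaseline

        rem Start collection before the ascmd stress run, stop it after
        logman start SSASBaseline
        logman stop SSASBaseline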

    Read the article

  • Cannot install Acrobat Reader on Ubuntu 12.04.1

    - by user91137
    I have been trying to install Acrobat Reader on my Ubuntu 12.04.1. In the beginning I tried to install it from the software-center, but it crashes with the report:

        (Reading database ... 189311 files and directories currently installed.)
        Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
         trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2
        No apport report written because MaxReports is reached already
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb

    As a solution, I tried to install it via the terminal with $ sudo apt-get install acroread and received the following (my system locale is Portuguese; messages translated):

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          libldap2 libgnome-speech7
        The following NEW packages will be installed:
          acroread
        0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
        Need to get 60.1 MB of archives.
        After this operation, 142 MB of additional disk space will be used.
        Get:1 http://archive.canonical.com/ubuntu/ precise/partner acroread i386 9.5.1-1precise1 [60.1 MB]
        Fetched 60.1 MB in 4min 17s (234 kB/s)
        (Reading database ... 189311 files and directories currently installed.)
        Unpacking acroread (from .../acroread_9.5.1-1precise1_i386.deb) ...
        dpkg: error processing /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb (--unpack):
         trying to overwrite '/usr/bin/acroread', which is also in package adobereader-ptb 8.1.7-2
        No apport report written because MaxReports is reached already
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         /var/cache/apt/archives/acroread_9.5.1-1precise1_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've already tried to upgrade and update with apt-get, tried to remove and re-install the software-center, and tried deleting the "problematic" files and re-updating apt-get... Nothing seems to work... Any solutions?
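    The error itself points at the likely fix: the old adobereader-ptb package owns /usr/bin/acroread and blocks the new one. A hedged sketch of the usual way out: remove the old package first, then install acroread again:

        sudo dpkg --remove adobereader-ptb   # drop the old Adobe Reader 8 (pt-BR) package
        sudo apt-get -f install              # let apt repair any half-configured state
        sudo apt-get install acroread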

    Read the article

  • Security Controls on data for P6 Analytics

    - by Jeffrey McDaniel
    The Star database and P6 Analytics calculate security based on P6 security, using OBS, global, project, cost, and resource security considerations. If there is some concern that users are not seeing expected data in P6 Analytics, here are some areas to review:

    1. Determining if a user has cost security is based on the Project level security privileges - either View Project Costs/Financials or Edit EPS Financials. If expecting to see costs, make sure one of these permissions is allocated.
    2. The user must have OBS access on a Project - not WBS level. WBS level security is not supported. Make sure the user has OBS access on the project level.
    3. Resource access is determined by what is granted in P6. Verify the resource access granted to this user in P6. Resource security is hierarchical; Project access will override Resource access based on the way security policies are applied.
    4. Module access must be given to a P6 user for that user to come over into Star/P6 Analytics. For earlier versions of RDB there was a report_user_flag on the Users table. This flag field is no longer used after P6 Reporting Database 2.1.
    5. For P6 Reporting Database versions 2.2 and higher, the Extended Schema Security service must be run to calculate all security. After any changes to privileges or security, this service must be rerun before any ETL.
    6. In P6 Analytics 2.0 or higher, a WebLogic user must exist that matches the P6 username. For example, user Tim must exist in both P6 and WebLogic for Tim to be able to log into P6 Analytics and access data based on P6 security. In earlier versions the username needed to exist in the RPD.
    7. Cache in OBI is another area that can sometimes make it seem a user isn't seeing the data they expect. While cache can be beneficial for performance in OBI, if the data is outdated it can return older, stale data. Clearing or turning off cache when rerunning a query can determine whether the returned result set came from cache or from the database.

    Read the article

  • Disk Drive not working

    - by user287681
    The CD/DVD drive on my sisters' (I'm helping her shift from Win. XP (now officially deprecated by Microsoft) to Ubuntu) system. Now, it may end up being a failed attempt, all together (Almost the whole last year (when she's been on XP) the disk drive hasn't (not even powering on) been working.), I just want to make sure I've explored every remote possibility. Because I figure, "Huh, now that I've got Ubuntu running, instead of XP, that (just) might make a difference.". I have tried using the sudo lshw command in the terminal, to (seemingly) no avil, but, who knows, you might be able to make something out of it. Here's the output: kyra@kyra-Satellite-P105:~$ sudo lshw [sudo] password for kyra: kyra-satellite-p105 description: Notebook product: Satellite P105 () vendor: TOSHIBA version: PSPA0U-0TN01M serial: 96084354W width: 64 bits capabilities: smbios-2.4 dmi-2.4 vsyscall32 configuration: administrator_password=disabled boot=oem-specific chassis=notebook frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=00900559-F88E-D811-82E0-00163680E992 *-core description: Motherboard product: Satellite P105 vendor: TOSHIBA physical id: 0 version: Not Applicable serial: 1234567890 *-firmware description: BIOS vendor: TOSHIBA physical id: 0 version: V4.70 date: 01/19/20092 size: 92KiB capabilities: isa pci pcmcia pnp upgrade shadowing escd cdboot acpi usb biosbootspecification *-cpu description: CPU product: Intel(R) Core(TM)2 CPU T5500 @ 1.66GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Core(TM)2 CPU T5 slot: U2E1 size: 1667MHz capacity: 1667MHz width: 64 bits clock: 166MHz capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx x86-64 constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm cpufreq *-cache:0 description: L1 cache physical id: 5 slot: L1 Cache size: 16KiB capacity: 16KiB capabilities: asynchronous internal write-back *-cache:1 description: L2 cache physical id: 6 slot: L2 Cache size: 2MiB capabilities: burst external write-back *-memory description: System Memory physical id: c slot: System board or motherboard size: 2GiB capacity: 3GiB *-bank:0 description: SODIMM DDR2 Synchronous physical id: 0 slot: M1 size: 1GiB width: 64 bits *-bank:1 description: SODIMM DDR2 Synchronous physical id: 1 slot: M2 size: 1GiB width: 64 bits *-pci description: Host bridge product: Mobile 945GM/PM/GMS, 943/940GML and 945GT Express Memory Controller Hub vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 03 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0 *-display:0 description: VGA compatible controller product: Mobile 945GM/GMS, 943/940GML Express Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 03 width: 32 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:16 memory:d0200000-d027ffff ioport:1800(size=8) memory:c0000000-cfffffff memory:d0300000-d033ffff *-display:1 UNCLAIMED description: Display controller product: Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller vendor: Intel Corporation physical id: 2.1 bus info: pci@0000:00:02.1 version: 03 width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: latency=0 resources: 
memory:d0280000-d02fffff *-multimedia description: Audio device product: NM10/ICH7 Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:44 memory:d0340000-d0343fff *-pci:0 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:3000(size=4096) memory:84000000-841fffff ioport:84200000(size=2097152) *-pci:1 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 2 vendor: Intel Corporation physical id: 1c.1 bus info: pci@0000:00:1c.1 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 ioport:4000(size=4096) memory:84400000-846fffff ioport:84700000(size=2097152) *-network description: Wireless interface product: PRO/Wireless 3945ABG [Golan] Network Connection vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 02 serial: 00:13:02:d6:d2:35 width: 32 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwl3945 driverversion=3.13.0-29-generic firmware=15.32.2.9 ip=10.110.20.157 latency=0 link=yes multicast=yes wireless=IEEE 802.11abg resources: irq:43 memory:84400000-84400fff *-pci:2 description: PCI bridge product: NM10/ICH7 Family PCI Express Port 3 vendor: Intel Corporation physical id: 1c.2 bus info: pci@0000:00:1c.2 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 ioport:5000(size=4096) memory:84900000-84afffff ioport:84b00000(size=2097152) *-usb:0 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:23 ioport:1820(size=32) *-usb:1 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #2 vendor: Intel Corporation physical id: 1d.1 bus info: pci@0000:00:1d.1 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:19 ioport:1840(size=32) *-usb:2 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #3 vendor: Intel Corporation physical id: 1d.2 bus info: pci@0000:00:1d.2 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:1860(size=32) *-usb:3 description: USB controller product: NM10/ICH7 Family USB UHCI Controller #4 vendor: Intel Corporation physical id: 1d.3 bus info: pci@0000:00:1d.3 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:16 ioport:1880(size=32) *-usb:4 description: USB controller product: NM10/ICH7 Family USB2 EHCI Controller vendor: Intel Corporation physical id: 1d.7 bus info: pci@0000:00:1d.7 version: 02 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: 
driver=ehci-pci latency=0 resources: irq:23 memory:d0544000-d05443ff *-pci:3 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: e2 width: 32 bits clock: 33MHz capabilities: pci subtractive_decode bus_master cap_list resources: ioport:2000(size=4096) memory:d0000000-d00fffff ioport:80000000(size=67108864) *-pcmcia description: CardBus bridge product: PCIxx12 Cardbus Controller vendor: Texas Instruments physical id: 4 bus info: pci@0000:0a:04.0 version: 00 width: 32 bits clock: 33MHz capabilities: pcmcia bus_master cap_list configuration: driver=yenta_cardbus latency=176 maxlatency=5 mingnt=192 resources: irq:17 memory:d0004000-d0004fff ioport:2400(size=256) ioport:2800(size=256) memory:80000000-83ffffff memory:88000000-8bffffff *-firewire description: FireWire (IEEE 1394) product: PCIxx12 OHCI Compliant IEEE 1394 Host Controller vendor: Texas Instruments physical id: 4.1 bus info: pci@0000:0a:04.1 version: 00 width: 32 bits clock: 33MHz capabilities: pm ohci bus_master cap_list configuration: driver=firewire_ohci latency=64 maxlatency=4 mingnt=3 resources: irq:17 memory:d0007000-d00077ff memory:d0000000-d0003fff *-storage description: Mass storage controller product: 5-in-1 Multimedia Card Reader (SD/MMC/MS/MS PRO/xD) vendor: Texas Instruments physical id: 4.2 bus info: pci@0000:0a:04.2 version: 00 width: 32 bits clock: 33MHz capabilities: storage pm bus_master cap_list configuration: driver=tifm_7xx1 latency=64 maxlatency=4 mingnt=7 resources: irq:17 memory:d0005000-d0005fff *-generic description: SD Host controller product: PCIxx12 SDA Standard Compliant SD Host Controller vendor: Texas Instruments physical id: 4.3 bus info: pci@0000:0a:04.3 version: 00 width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list configuration: driver=sdhci-pci latency=64 maxlatency=4 mingnt=7 resources: irq:17 memory:d0007800-d00078ff *-network description: Ethernet interface product: PRO/100 VE Network Connection vendor: Intel Corporation physical id: 8 bus info: pci@0000:0a:08.0 logical name: eth0 version: 02 serial: 00:16:36:80:e9:92 size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=e100 driverversion=3.5.24-k2-NAPI duplex=half latency=64 link=no maxlatency=56 mingnt=8 multicast=yes port=MII speed=10Mbit/s resources: irq:20 memory:d0006000-d0006fff ioport:2000(size=64) *-isa description: ISA bridge product: 82801GBM (ICH7-M) LPC Interface Bridge vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 02 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: driver=lpc_ich latency=0 resources: irq:0 *-ide description: IDE interface product: 82801GBM/GHM (ICH7-M Family) SATA Controller [IDE mode] vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 02 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list configuration: driver=ata_piix latency=0 resources: irq:19 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:18b0(size=16) *-serial UNCLAIMED description: SMBus product: NM10/ICH7 Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 02 width: 32 bits clock: 33MHz configuration: latency=0 resources: ioport:18c0(size=32) *-scsi physical id: 1 logical name: scsi0 capabilities: emulated *-disk 
description: ATA Disk product: ST9250421AS vendor: Seagate physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: SD13 serial: 5TH0B2HB size: 232GiB (250GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=000d7fd5 *-volume:0 description: EXT4 volume vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 logical name: / version: 1.0 serial: 13bb4bdd-8cc9-40e2-a490-dbe436c2a02d size: 230GiB capacity: 230GiB capabilities: primary bootable journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2014-06-01 17:37:01 filesystem=ext4 lastmountpoint=/ modified=2014-06-01 21:15:21 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2014-06-01 21:15:21 state=mounted *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 2037MiB capacity: 2037MiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 2037MiB capabilities: nofs *-remoteaccess UNCLAIMED vendor: Intel physical id: 1 capabilities: inbound kyra@kyra-Satellite-P105:~$
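    A few hedged checks before writing the drive off (suggestions, not a diagnosis): the lshw dump above lists the SATA/IDE controller and the hard disk but no optical drive at all, which usually points at hardware or BIOS rather than Ubuntu:

        # Did the kernel ever register an optical drive?
        dmesg | grep -iE 'cd-?rom|sr0|dvd'

        # Are the usual device nodes present?
        ls -l /dev/sr0 /dev/cdrom 2>/dev/null

        # A narrower lshw view of disks and storage controllers:
        sudo lshw -C disk -C storage

    If nothing shows up here, and the drive does not even power on as described, checking the drive's cabling or testing the drive in another machine would be the next step.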

    Read the article

  • Docky has stopped working since update

    - by Fraekkert
    I'm running Ubuntu 10.10 64-bit. I'm using the Docky ppa, and since the latest update It won't start. If i run it from the terminal, this is what i get: [Info 09:21:19.005] Docky version: 2.1.0 bzr docky r1761 ppa [Info 09:21:19.024] Kernel version: 2.6.35.24 [Info 09:21:19.026] CLR version: 2.0.50727.1433 [Debug 09:21:19.493] [UserArgs] BufferTime = 0 [Debug 09:21:19.494] [UserArgs] MaxSize = 2147483647 [Debug 09:21:19.494] [UserArgs] NetbookMode = False [Debug 09:21:19.494] [UserArgs] NoPollCursor = False [Debug 09:21:19.528] [SystemService] Using org.freedesktop.UPower for battery information [Info 09:21:19.564] [ThemeService] Setting theme: Transparent [Debug 09:21:19.587] [DesktopItemService] Loading remap file '/usr/share/docky/remaps.ini'. [Debug 09:21:19.599] [DesktopItemService] Remapping 'Picasa3.exe' to 'picasa'. [Debug 09:21:19.599] [DesktopItemService] Remapping 'nbexec' to 'netbeans'. [Debug 09:21:19.599] [DesktopItemService] Remapping 'deja-dup-preferences' to 'deja-dup'. [Debug 09:21:19.599] [DesktopItemService] Remapping 'VirtualBox' to 'virtualbox'. [Warn 09:21:19.600] [DesktopItemService] Could not find remap file '/home/lasse/.local/share/docky/remaps.ini'! [Debug 09:21:19.602] [DesktopItemService] Loading desktop item cache '/home/lasse/.cache/docky/docky.desktop.en_DK.utf8.cache'. [Info 09:21:20.101] [DockServices] Dock services initialized. [Debug 09:21:20.134] [DBusManager] DBus Registered: org.gnome.Docky [Debug 09:21:20.142] [DBusManager] DBus Registered: net.launchpad.DockManager Stacktrace: at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062> at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062> at System.IO.FileStream.ReadData (intptr,byte[],int,int) <IL 0x00009, 0x00047> at System.IO.FileStream.RefillBuffer () <IL 0x0001c, 0x0002b> at System.IO.FileStream.ReadByte () <IL 0x00079, 0x000c7> at Mono.Addins.Serialization.BinaryXmlReader.ReadNext () <IL 0x0000b, 0x00031> at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x0003c, 0x00053> at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f> at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f> And this .Skip () continues infinitely, and very fast. I've tried cleaning the cache and reinstalling docky, but without luck.

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today, after trying to install and remove BURG and a few themes, I ran into an issue: now I can't install or remove anything. Here is the output (originally partly in Finnish, since I couldn't change the language; translated to English here): lassi@lassi-ubuntu:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be REMOVED: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded. 3 not fully installed or removed. After this operation, 7,180 kB of disk space will be freed. Do you want to continue [Y/n]? y (Reading database ... 166462 files and directories currently installed.) Removing burg-theme-fortune ... sudo: update-burg: command not found dpkg: error processing burg-theme-fortune (--remove): subprocess installed post-removal script returned error exit status 1 Removing burg-theme-gnome ... sudo: update-burg: command not found dpkg: error processing burg-theme-gnome (--remove): subprocess installed post-removal script returned error exit status 1 Removing burg-theme-picchio ... sudo: update-burg: command not found dpkg: error processing burg-theme-picchio (--remove): subprocess installed post-removal script returned error exit status 1 Too many errors occurred while processing: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1) Basically what seems to happen is this: it reads the package lists, then tries to remove the package burg-theme-fortune. This fails because the update-burg command is not found, and dpkg reports an error while processing the package. The same goes for all three packages. In the end it claims that there were too many errors, and the packages stay installed. I also tried installing burg, since the removal scripts try to run update-burg, but it appears that apt tries to remove these packages whenever I install, remove, or do anything else with it. Any ideas how I could solve this issue? Edit: Here is the output of apt-get install burg (tried installing again to get English output) lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. 
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$

    Read the article

  • Package dependency errors: libc

    - by piyush
    I was trying to install kde-full when libc produced some unmet-dependency errors. When running sudo apt-get install kde-full, the terminal output ends with: libc6 : Depends: libc-bin (= 2.15-0ubuntu10) libc6:i386 : Depends: libc-bin:i386 (= 2.15-0ubuntu10) libc6-dev : Depends: libc6 (= 2.15-0ubuntu10.3) but 2.15-0ubuntu10 is to be installed libc6-i386 : Depends: libc6 (= 2.15-0ubuntu10.3) but 2.15-0ubuntu10 is to be installed When running sudo apt-get -f install, this shows up at the end: De-configuring libc6:i386 ... A copy of the C library was found in an unexpected directory: '/lib/x86_64-linux-gnu/libc-2.15.so' It is not safe to upgrade the C library in this situation; please remove that copy of the C library or get it out of '/lib/x86_64-linux-gnu' and try again. dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Preparing to replace libc6:i386 2.15-0ubuntu10 (using .../libc6_2.15-0ubuntu10.3_i386.deb) ... De-configuring libc6 ... A copy of the C library was found in an unexpected directory: '/lib/i386-linux-gnu/ld-2.15.so' It is not safe to upgrade the C library in this situation; please remove that copy of the C library or get it out of '/lib/i386-linux-gnu' and try again. dpkg: error processing /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_i386.deb (--unpack): subprocess new pre-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_amd64.deb /var/cache/apt/archives/libc6_2.15-0ubuntu10.3_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Any suggestions on how to fix this? I no longer want kde-full; I only need other installations to work. I've already run sudo apt-get update several times, so there's no need to suggest that. UPDATE: here is the output of sudo dpkg --configure -a: ~$ sudo dpkg --configure -a dpkg: dependency problems prevent configuration of libc6-dev: libc6-dev depends on libc6 (= 2.15-0ubuntu10.3); however: Version of libc6 on system is 2.15-0ubuntu10. dpkg: error processing libc6-dev (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of libc6-i386: libc6-i386 depends on libc6 (= 2.15-0ubuntu10.3); however: Version of libc6 on system is 2.15-0ubuntu10. dpkg: error processing libc6-i386 (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: libc6-dev libc6-i386 ~$

    Read the article

  • Query optimization using composite indexes

    - by xmarch
    Many times during the creation of a new Coherence application, developers do not pay attention to the way cache queries are constructed; they only check that the queries comply with the functional specs. Later, performance testing shows that the queries perform poorly, and only then do developers start working on improvements until the non-functional performance requirements are met. This post describes the optimization process of a real-life scenario, where using a composite attribute index brought a radical improvement in query execution times: they went down from 4 seconds to 2 milliseconds!

    E-commerce solution based on Oracle ATG – Endeca

    In the context of a new e-commerce solution based on Oracle ATG – Endeca, Oracle Coherence has been used to calculate and store SKU prices. In this architecture, a Coherence cache stores the final SKU prices used for Endeca baseline indexing. Each SKU price is calculated from a base SKU price and a series of calculations based on information from corporate global discounts. The corporate global discount information is stored in an auxiliary Coherence cache with over 800,000 entries. In particular, to obtain each price the process needs to execute six queries over the global discount cache. After the implementation was finished, we discovered that the most expensive step in the price calculation process was the global discounts cache query. This query has 10 parameters and is executed 6 times for each SKU price calculation. The steps taken to optimize this query are described below.

    Starting point

    The initial query was: String filter = "levelId = :iLevelId AND salesCompanyId = :iSalesCompanyId AND salesChannelId = :iSalesChannelId "+ "AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND brand = :iBrand AND manufacturer = :iManufacturer "+ "AND areaId = :iAreaId AND endDate >= :iEndDate AND startDate <= :iStartDate"; Map<String, Object> params = new HashMap<String, Object>(10); // Fill all parameters. params.put("iLevelId", xxxx); // Executing filter. Filter globalDiscountsFilter = QueryHelper.createFilter(filter, params); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(globalDiscountsFilter); With the small dataset used for development, the cache queries performed very well. However, when carrying out performance testing with a real-world sample size of 800,000 entries, each query execution took more than 4 seconds.

    First round of optimizations

    The first optimization step was the creation of a separate Coherence index for each of the 10 attributes used by the filter. This avoids object deserialization while executing the query. Each index was created as follows: globalDiscountsCache.addIndex(new ReflectionExtractor("getXXX"), false, null); After adding these indexes the query execution time was reduced to between 450 ms and 1 s. However, these execution times were still not good enough.

    Second round of optimizations

    In this optimization phase a Coherence query explain plan was used to identify how many entries each index reduced the result set by, along with the cost in milliseconds of executing that part of the query. Though the explain plan showed that all the indexes for the query were being used, it also showed that the ordering of the query parameters was sub-optimal.
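    As an aside, such an explain plan can be gathered programmatically with the QueryRecorder aggregator. Below is a minimal sketch, assuming Coherence 3.7.1 or later (where QueryRecorder is available); the cache name and the getters are illustrative placeholders, not the ones from the project:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.QueryRecord;
    import com.tangosol.util.aggregator.QueryRecorder;
    import com.tangosol.util.extractor.ReflectionExtractor;
    import com.tangosol.util.filter.AndFilter;
    import com.tangosol.util.filter.EqualsFilter;

    public class ExplainPlanSketch {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("global-discounts"); // illustrative name
            // A simplified two-attribute filter standing in for the real 10-parameter query.
            Filter filter = new AndFilter(
                    new EqualsFilter(new ReflectionExtractor("getBrand"), "someBrand"),
                    new EqualsFilter(new ReflectionExtractor("getFamilyId"), Integer.valueOf(42)));
            // RecordType.EXPLAIN estimates the plan without executing the query;
            // RecordType.TRACE executes the query and records actual per-step costs.
            QueryRecord plan = (QueryRecord) cache.aggregate(
                    filter, new QueryRecorder(QueryRecorder.RecordType.EXPLAIN));
            // The record lists, for each filter step, the index applied,
            // its estimated effectiveness, and its cost.
            System.out.println(plan);
        }
    }

    The printed record is what drives reordering decisions like the ones described next.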
    Parameters associated with high-cardinality object attributes should appear at the beginning of the filter; more specifically, the attributes that filter out the highest number of records should be placed first. But examining the corporate global discount data we realized that, depending on the values of the parameters used in the query, the "good" order for the attributes was different. In particular, if the attributes brand and family had specific values, it was better to use a different query with a different attribute order. Ultimately, we ended up with three optimal variants of the query, each used in its relevant case: String filter = "brand = :iBrand AND familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "familyId = :iFamilyId AND departmentId = :iDepartmentId AND levelId = :iLevelId AND brand = :iBrand "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; String filter = "brand = :iBrand AND departmentId = :iDepartmentId AND familyId = :iFamilyId AND levelId = :iLevelId "+ "AND manufacturer = :iManufacturer AND endDate >= :iEndDate AND salesCompanyId = :iSalesCompanyId "+ "AND areaId = :iAreaId AND salesChannelId = :iSalesChannelId AND startDate <= :iStartDate"; Using the appropriate query depending on the values of the brand and family parameters, the execution time dropped to between 100 ms and 150 ms. But these execution times were still not good enough, and the solution was cumbersome.

    Third and last round of optimizations

    The third and final optimization was to introduce a composite index. This did mean that it was not possible to use the Coherence Query Language (CohQL), as composite indexes are not currently supported in CohQL. As the original query had 8 parameters using EqualsFilter, 1 using GreaterEqualsFilter and 1 using LessEqualsFilter, the composite index was built over the 8 attributes using EqualsFilter. The final query had an EqualsFilter on the multi-extractor, plus a GreaterEqualsFilter and a LessEqualsFilter for the 2 remaining attributes. All individual indexes were dropped except the ones used by the LessEqualsFilter and GreaterEqualsFilter. We were now running in a scenario with an 8-attribute composite filter and 2 single-attribute filters.
    The composite index was created as follows: ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"), new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"), new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"), new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") }; MultiExtractor me = new MultiExtractor(ve); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); globalDiscountsCache.addIndex(me, false, null); And the final query was: ValueExtractor[] ve = { new ReflectionExtractor("getSalesChannelId"), new ReflectionExtractor("getLevelId"), new ReflectionExtractor("getAreaId"), new ReflectionExtractor("getDepartmentId"), new ReflectionExtractor("getFamilyId"), new ReflectionExtractor("getManufacturer"), new ReflectionExtractor("getBrand"), new ReflectionExtractor("getSalesCompanyId") }; MultiExtractor me = new MultiExtractor(ve); // Fill composite parameters. String iSalesCompanyId = xxxx; ... AndFilter composite = new AndFilter(new EqualsFilter(me, Arrays.asList(iSalesChannelId, iLevelId, iAreaId, iDepartmentId, iFamilyId, iManufacturer, iBrand, iSalesCompanyId)), new GreaterEqualsFilter(new ReflectionExtractor("getEndDate"), iEndDate)); AndFilter finalFilter = new AndFilter(composite, new LessEqualsFilter(new ReflectionExtractor("getStartDate"), iStartDate)); NamedCache globalDiscountsCache = CacheFactory.getCache(CacheConstants.GLOBAL_DISCOUNTS_CACHE_NAME); Set applicableDiscounts = globalDiscountsCache.entrySet(finalFilter); Using this composite index the query improved dramatically, and the execution time dropped to between 2 ms and 4 ms. These execution times completely met the non-functional performance requirements. Note that when using the composite index, the order of the attributes inside the ValueExtractor is not relevant.

    Read the article

  • SharePoint 2010 HierarchicalConfig Caching Problem

    - by Damon
    We've started using the Application Foundations for SharePoint 2010 in some of our projects at work, and I came across a nasty issue with the hierarchical configuration settings. I have some settings that I am storing at the Farm level, and as I was testing my code it seemed like the settings were not being saved; at least, that is how it appeared at first. However, I happened to reset IIS and the settings suddenly appeared. Immediately, I figured that it must be a caching issue and dug into the code base. I found that there was a 10-second caching mechanism in the SPFarmPropertyBag and SPWebAppPropertyBag classes. So I ran another test where I waited 10 seconds to make sure that enough time had passed to force the caching mechanism to refresh the data. After 10 minutes the cache had still not cleared. After digging a bit further, I found a double-checked lock that looked a bit off in the GetSettingsStore() method of the SPFarmPropertyBag class: if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds > cacheInterval)) { //Need to exist so don't deadlock. rrLock.EnterWriteLock(); try { //make sure first another thread didn't already load... if (_settingStore == null) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } } finally { rrLock.ExitWriteLock(); } } What ends up happening here is that the outer check tests whether _settingStore is null or the cache has expired, but the inner check only tests whether _settingStore is null, which is never the case after the first load. Ergo, the cached settings are never refreshed. The fix is really easy: just add the cache check back into the inner if statement. //make sure first another thread didn't already load... if (_settingStore == null || (DateTime.Now.Subtract(lastLoad).TotalSeconds > cacheInterval)) { _settingStore = WebAppSettingStore.Load(this.webApplication); lastLoad = DateTime.Now; } And then it starts working just fine, as long as you wait at least 10 seconds for the cache to clear.
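    Since this corrected pattern is easy to get wrong in any language, here is a minimal, self-contained Java sketch of the same time-based double-checked caching idea (all names are invented for illustration; this is not the Application Foundations API):

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ExpiringSettingsCache {
        private static final long CACHE_INTERVAL_NANOS = 10_000_000_000L; // 10 seconds
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private volatile Object settingStore;   // cached settings store
        private volatile long lastLoadNanos;    // time of the last load

        public Object getSettingStore() {
            long now = System.nanoTime();
            if (settingStore == null || now - lastLoadNanos > CACHE_INTERVAL_NANOS) {
                lock.writeLock().lock();
                try {
                    // Re-check BOTH conditions inside the lock. Checking only
                    // "settingStore == null" here reproduces the bug described
                    // above: a populated-but-stale cache would never be reloaded.
                    if (settingStore == null || now - lastLoadNanos > CACHE_INTERVAL_NANOS) {
                        settingStore = loadSettingStore();
                        lastLoadNanos = System.nanoTime();
                    }
                } finally {
                    lock.writeLock().unlock();
                }
            }
            return settingStore;
        }

        private Object loadSettingStore() {
            // Placeholder for the expensive load from the backing store.
            return new Object();
        }
    }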

    Read the article

  • After juju deploy: [Errno 2] No such file or directory

    - by user84471
    I get this after juju deploy. hsf@ubuntuserver:~$ juju deploy wordpress 2012-10-23 13:12:15,663 INFO Searching for charm cs:precise/wordpress in charm store [Errno 2] No such file or directory: '/home/hsf/.juju/cache/downloads/cs_3a_precise_2f_wordpress-9.charmGLg7sI.part' 2012-10-23 13:12:16,075 ERROR [Errno 2] No such file or directory: '/home/hsf/.juju/cache/downloads/cs_3a_precise_2f_wordpress-9.charmGLg7sI.part' I am using Ubuntu 12.04 server.

    Read the article

  • RTMPDUMPTV problem

    - by ranavita
    (Reading database ... 459988 files and directories currently installed.) Unpacking librtmp-dev (from .../librtmp-dev_2.4~20110711.gitc28f1bab-1_amd64.deb) ... dpkg: error processing /var/cache/apt/archives/librtmp-dev_2.4~20110711.gitc28f1bab-1_amd64.deb (--unpack): trying to overwrite '/usr/include/librtmp/amf.h', which is also in package rtmpdump 2.5-0ubuntu2~precise Errors were encountered while processing: /var/cache/apt/archives/librtmp-dev_2.4~20110711.gitc28f1bab-1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) The above error appears after running sudo apt-get install -f. I am trying to fix the broken rtmptv package.

    Read the article

  • After upgrading to 13.10, no applications are visible in the application lens

    - by moeso
    I upgraded from 13.04 to 13.10 and now all icons in the application lens are gone. This is what I have tried so far:
    - apt-get install --reinstall unity-lens-applications
    - unity --replace and unity --reset-icons
    - moving ~/.config to ~/.config2
    - deleting ~/.cache/software-center and ~/.cache/unity
    Most of these things have been suggested in this question: Unity Applications lens is empty - but all to no avail.

    Read the article

  • [Japanese OTN seminar on Oracle TimesTen In-Memory Database; original title garbled in the source]

    - by Yusuke.Yamamoto
    Date: 2010/11/04. The original Japanese description is garbled in the source; what remains indicates an on-demand seminar covering the in-memory database Oracle TimesTen In-Memory Database and Oracle In-Memory Database Cache (Oracle TimesTen IMDB / Oracle IMDB Cache 11g). Recording: http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/TT11041100.wmv Slides: http://www.oracle.com/technology/global/jp/ondemand/otn-seminar/pdf/TimesTen_OrD_20101104_print.pdf

    Read the article

  • Don't Kill the Password

    - by Anthony Trudeau
    A week ago Mr. Honan from Wired.com penned an article on security titled “Kill the Password: Why a String of Characters Can’t Protect Us Anymore.” He asserts that the password is not effective and that a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim’s vendetta. His conclusion is ill-conceived, even though there are smatterings of truth and good advice in it. The password is a security barrier, much like a lock on your door. In and of itself it doesn’t guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like “password”, which is like the sliding latch on a bathroom stall. But just like in the real world, a lock isn’t always enough. You can have a lock, a security system, video cameras, guard dogs, and even armed security guards; but none of that guarantees your protection. Even top-secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that’s the crux of it. There are real hackers out there who are that good. Killer coding ninja monkeys do exist! We still have locks on our doors, because they still serve their role. Passwords are no different. Security doesn’t end with the password. Most people would agree that stuffing your mattress with your life savings isn’t a good idea even if you have the best locks and security system. Most people agree it’s safest to have the money in a bank. Essentially this is compartmentalization. Compartmentalization extends to the online world as well. You’re at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you’re lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done. An attacker should not be able to get into your bank account because they breached your Twitter account. It’s time to prioritize once you’ve compartmentalized. This simply means deciding how much security you want for the different compartments, which I’ll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It’s similar to an engineering adage, but in this case it’s security, convenience, and privacy: pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer, or the privacy of not having the bank records, is more important than the added security. The following are lists of security do’s and don’ts (these aren’t meant to be exhaustive, and each could be an article in itself):

    Security Do’s:
    - Use strong passwords based on a phrase
    - Use encryption whenever you can (e.g. HTTPS in Facebook)
    - Use a firewall (and learn to use it properly)
    - Configure security on your router (including port blocking)
    - Keep your operating system patched
    - Make routine backups of important files
    - Realize that if you’re not paying for it, you’re the product

    Security Don’ts:
    - Link accounts if at all possible
    - Reuse passwords across your security zones
    - Use real answers for security questions (e.g. mother’s maiden name)
    - Trust anything you download
    - Ignore message boxes shown by your system or browser
    - Forget to test your backups
    - Share your primary email indiscriminately

    Only you can decide your comfort level between convenience, privacy, and security.
    Attackers are going to find exploits in software; software is complex and depends on other software. Those exploits are the responsibility of the software company, but your security is always your responsibility. Complete security is an illusion, yet there is plenty you can do to minimize the risk online, just as you do in the physical world. Be safe and enjoy what the Internet has to offer. I expect passwords to be necessary for just as long as locks are.

    Read the article

  • Free tools versus paid tools.

    - by Dennis Vroegop
    We live in a strange world. Information should be free. Tools should be free. Software should be free (and I mean free as in free beer, not as in free speech). Of course, since I make my living (and pay my mortgage) by writing software, I tend to disagree. Or rather: I want to get paid for the things I do in the daytime. Next to that I also spend time on projects I feel are valuable for the community, which I do for free. The reason I can do that is that I get paid enough in the daytime to afford that time. It gives me a good feeling, I help others, and it’s fun to do. But the baseline is: I get paid to write software. I am sure this goes for a lot of other developers. We get paid for what we do during the daytime and spend our free time giving back. So why does everyone always make a fuss when a company suddenly starts to charge for software? To me, this seems like a very reasonable decision. Companies need money: they have staff to pay, buildings to rent, coffee to buy, etc. None of this comes free, so it makes sense that they charge their customers for the things they produce. I know there’s a very big Open Source market out there, where companies give away (parts of) their software and get revenue out of the services they provide. But this doesn’t work if your product doesn’t need services. If you build a great tool that is very easy to use, and you give it away for free, you won’t get any money by selling services that no user of your tool really needs. So what do you do? You charge money for your tool. It’s either that or stop developing the tool and turn to other, more profitable projects. Like it or not, that’s simple economics at work. You have something other people want, so you charge them for it. This week it was announced that what I believe is the most-used tool for .NET developers (besides Visual Studio, of course), namely Red Gate’s .NET Reflector, will stop being a free tool. They will charge you $35 for the next version. Suddenly Twitter was on fire and everyone was mad about it. But why? The tool is downloaded by so many developers that it must be valuable to them. I know of no serious .NET developer who hasn’t got it on his or her machine. So apparently the tool gives them something they need. So why do they expect it to be free? There are developers out there maintaining and extending the tool, building new and better versions of it. And the price? $35 doesn’t seem like much. If I think of the time the tool has saved me, the 35 dollars would be earned back in a day. If by spending this amount of money I can rely on great software that helps me do my job better and faster, I have no problem spending it. I know that there is a great team behind it (the Red Gate tools are a must-have when developing SQL systems, for instance), and I do believe they are within their rights to charge for it. So... there you have it. This is, of course, my opinion. You may think otherwise. Please let me know in the comments what you think! Technorati tags: redgate, reflector, opensource

    Read the article
