Search Results

Search found 1388 results on 56 pages for 'michelle jun lee'.


  • JavaScript errors in Internet Explorer with jQuery, but working fine in Firefox

    - by user994319
    A quick question I'm hoping someone can help me out with. On Firefox our jQuery slider is working perfectly; however, when viewing with Internet Explorer some JavaScript errors occur. The website is http://foscam-uk.com/index.php. Hoping there is a possible solution to this. Kind regards and thank you!

    Errors (webpage error details):

        User Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.3)
        Timestamp: Wed, 6 Jun 2012 22:36:43 UTC

        Message: Object doesn't support this property or method
        Line: 5653  Char: 9  Code: 0  URI: http://foscam-uk.com/js/prototype/prototype.js

        Message: Object doesn't support this property or method
        Line: 5988  Char: 5  Code: 0  URI: http://foscam-uk.com/js/prototype/prototype.js

        Message: Object doesn't support this property or method
        Line: 2  Char: 5  Code: 0  URI: http://foscam-uk.com/skin/frontend/default/theme316/js/scripts.js

        Message: Object doesn't support this property or method
        Line: 5736  Char: 7  Code: 0  URI: http://foscam-uk.com/js/prototype/prototype.js

        Message: Object doesn't support this property or method
        Line: 5988  Char: 5  Code: 0  URI: http://foscam-uk.com/js/prototype/prototype.js

        Message: Object doesn't support this property or method
        Line: 73  Char: 11  Code: 0  URI: http://foscam-uk.com/index.php

    Read the article

  • Simplifying a four-dimensional rule table in Matlab: addressing rows and columns of each dimension

    - by Cate
    Hi all. I'm currently trying to automatically generate a set of fuzzy rules for a set of observations, each of which contains four values, where each observation corresponds to a state (a good example is Fisher's iris data). In Matlab I am creating a four-dimensional rule table where a single cell (a,b,c,d) will contain the corresponding state. To reduce the table I am following the Hong and Lee method of row and column similarity checking, but I am having difficulty understanding how to address the rows and columns of the third and fourth dimensions. From the method, it is my understanding that each dimension is addressed individually and, if the rule is true, the table is simplified. The rules for merging are as follows:

    1. If all cells in adjacent columns or rows are the same.
    2. If two cells are the same, or if either of them is empty, in adjacent columns or rows, and at least one cell in both is not empty.
    3. If all cells in a column or row are empty and the cells in its two adjacent columns or rows are the same, merge the three.
    4. If all cells in a column or row are empty and the cells in its two adjacent columns or rows are the same or either of them is empty, merge the three.
    5. If all cells in a column or row are empty, all the non-empty cells in the column or row to its left have the same region, and all the non-empty cells in the column or row to its right have the same region, but one different from the previously mentioned region, merge these three columns into two parts.

    Now for the confusing bit. Simply checking if the entire row/column is the same as the adjacent one (rule 1) seems simple enough:

        if (a,:,:,:) == (a+1,:,:,:)
           (:,b,:,:) == (:,b+1,:,:)
           (:,:,c,:) == (:,:,c+1,:)
           (:,:,:,d) == (:,:,:,d+1)

    Is this correct? But to check if the elements in the row/column match, or either is zero (rules 2 and 4), I am a bit lost. Would it be something along these lines:

        for a = 1:20
            for i = 1:length(b)
                if (a+1,i,:,:) == (a,i,:,:)
                    ...
                elseif (a+1,i,:,:) == 0
                    ...
                elseif (a,i,:,:) == 0
                    ...

    and for the third and fourth dimensions:

        for c = 1:20
            for i = 1:length(a)
                if (i,:,c,:) == (i,:,c+1,:)
                    ...
                elseif (i,:,c+1,:) == 0
                    ...
                elseif (i,:,c,:) == 0
                    ...

        for d = 1:20
            for i = 1:length(a)
                if (i,:,:,d) == (i,:,:,d+1)
                    ...
                elseif (i,:,:,d+1) == 0
                    ...
                elseif (i,:,:,d) == 0
                    ...

    Even general help with four-dimensional arrays would be useful, as I'm so confused by the thought of more than three! I would advise you look at the paper to understand my meaning - they themselves have used the iris data but only give an example with a 2D table. Thanks in advance, hopefully!

    Read the article

  • Getting an "Internal Server Error"

    - by Rishi2686
    Hi there. When I try to run my site it gives an "Internal Server Error"; when I refresh the page I get my result properly. The error page looks like this:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.
        Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
        More information about this error may be available in the server error log.
        Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.

    I also checked the error_log file on my server; it gives errors such as:

        [Sat Jun 12 01:21:55 2010] [error] [client 117.195.6.76] File does not exist: /home/rohit25/public_html/test/500.shtml, referer: http://www.test.mysite.com/home.php

    Sometimes the error can be:

        [Sat May 29 19:35:12 2010] [error] [client 97.85.189.208] File does not exist: /home2/carlton/public_html/test/favicon.ico

    Are there any changes required in the configuration file? I also tried to point this error code at a custom error page; it shows the error page, but that did not resolve this issue. Your urgent help will be greatly appreciated.

    Read the article

  • I need to loop a formula with the Offset function until the cell is blank

    - by CEMG
    I need to loop the formula below until column "B", which contains dates, is empty. I am stuck and I just can't seem to write the VBA code to do the loop until there are no more dates in column "B". The formula is smoothing out the yields by using those dates that have a yield. I hope anyone would be able to help me. Thanks in advance.

             A       B       C       D
        5    Factor  Date    Yield   Input
        6    3       May-10  .25
        7    1       Jun-10
        8    2       Jul-10
        9    3       Aug-10  0.2000
        10   1       Sep-10
        11   2       Oct-10
        12   3       Nov-10  0.2418
        13   1       Dec-10
        14   2       Jan-11
        15   3       Feb-11  0.3156
        16   1       Mar-11
        17   2       Apr-11

        Sub IsNumeric()
            'IF(ISNUMBER(C6),C6,
            If Application.IsNumber(Range("c6").Value) Then
                Range("d6").Value = Range("c6").Value
            'IF(C6<C5,((OFFSET(C6,2,0)-OFFSET(C6,-1,0))*A6/3+OFFSET(C6,-1,0)),
            ElseIf Range("c6").Value < Range("c5").Value Then
                Range("d6").Value = (Range("c6").Offset(2, 0).Value - Range("c6").Offset(-1, 0).Value) * (Range("a6").Value / 3) + Range("c6").Offset(-1, 0).Value
            'IF(C6<>C7,((OFFSET(C6,1,0)-OFFSET(C6,-2,0))*(A6/3)+OFFSET(C6,-2,0)),"")))
            ElseIf Range("c6").Value <> Range("c7").Value Then
                Range("d6").Value = (Range("c6").Offset(1, 0).Value - Range("c6").Offset(-2, 0).Value) * (Range("a6").Value / 3) + Range("c6").Offset(-2, 0).Value
            Else
                Range("d6").Value = ""
            End If
        End Sub

    Read the article

  • Macports and virtualenv site-packages Fallback

    - by Streeter
    I've installed Django and Python with MacPorts as this link suggested. However, I'd like to use virtualenv to install more packages. My understanding is that if I do not pass --no-site-packages to virtualenv, I should get the currently installed packages in addition to whatever packages I install into the virtual environment. Is this correct? As an example, I've installed Django through MacPorts and then created a virtual environment, but I cannot import Django from within that virtual environment:

        [streeter@mordecai]:~$ mkvirtualenv django-test
        New python executable in django-test/bin/python
        Installing setuptools............done.
        ...
        (django-test)[streeter@mordecai]:~$ pip install django-debug-toolbar
        Downloading/unpacking django-debug-toolbar
          Downloading django-debug-toolbar-0.8.4.tar.gz (80Kb): 80Kb downloaded
          Running setup.py egg_info for package django-debug-toolbar
        Installing collected packages: django-debug-toolbar
          Running setup.py install for django-debug-toolbar
        Successfully installed django-debug-toolbar
        Cleaning up...
        (django-test)[streeter@mordecai]:~$ python
        Python 2.6.1 (r261:67515, Jun 24 2010, 21:47:49)
        [GCC 4.2.1 (Apple Inc. build 5646)] on darwin
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import django
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named django
        >>>

    So I can install packages into the virtual environment, but it isn't picking up the global site-packages. Or am I not doing something correctly / missing something / misunderstanding how virtualenv works? I've got Mac OS 10.6 (Snow Leopard), have updated my MacPorts packages and am using MacPorts' python26 (via python_select python26).

    Read the article

  • MSSQL 2005 FOR XML

    - by Lima
    Hi, I want to export data from a table to a specifically formatted XML file. I am fairly new to XML files, so what I am after may be quite obvious, but I just can't find what I am looking for on the net. The format of the XML results I need is:

        <data>
          <event start="May 28 2006 09:00:00 GMT"
                 end="Jun 15 2006 09:00:00 GMT"
                 isDuration="true"
                 title="Writing Timeline documentation"
                 image="http://simile.mit.edu/images/csail-logo.gif">
            A few days to write some documentation
          </event>
        </data>

    My table structure is: name VARCHAR(50), description VARCHAR(255), startDate DATETIME, endDate DATETIME. (I am not too interested in the XML fields image or isDuration at this point in time.) I have tried:

        SELECT [name]
              ,[description]
              ,[startDate]
              ,[endTime]
        FROM [testing].[dbo].[time_timeline]
        FOR XML RAW('event'), ROOT('data'), type

    Which gives me:

        <data>
          <event name="Test1" description="Test 1 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
          <event name="Test2" description="Test 2 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
        </data>

    What I am missing is that the description needs to be outside of the event attributes, and there needs to be a tag. Is anyone able to point me in the correct direction, or point me to a tutorial or similar on how to accomplish this? Thanks, Matt
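    A hedged sketch of one way to get there (untested; column names taken from the query above): FOR XML PATH gives per-column control that FOR XML RAW does not, so the attribute columns can be aliased with a leading @ and the description mapped to the element's text node via text():

        -- attributes via @-aliases, element body via text()
        SELECT [startDate]   AS [@start]
              ,[endTime]     AS [@end]
              ,[name]        AS [@title]
              ,[description] AS [text()]
        FROM [testing].[dbo].[time_timeline]
        FOR XML PATH('event'), ROOT('data'), TYPE

    The dates would still come out in ISO format; producing the "May 28 2006 09:00:00 GMT" style shown above would need something like CONVERT(varchar(20), [startDate], 100) around each date column.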

    Read the article

  • C++: I need help with this program. Every time I try to run it, I get a problem

    - by FOXMULDERIZE
    1- The program must read numeric data from a file.
    2- Only one number per line.
    3- Halfway between those numbers is a negative number.
    4- The program must sum those that are above the negative number in one accumulator and those below the negative number in another accumulator.
    5- The console shall print both results and determine which is greater or equal.

        #include <iostream>
        #include <fstream>
        using namespace std;

        void showvalues(int, int, int[]);
        void showvalues2(int, int);
        void sumtotal(int, int);

        int main()
        {
            int total1 = 0;
            int total2 = 0;
            const int SIZE_A = 9;
            int arreglo[SIZE_A];
            int suma, total, a, b, c, d, e, f;
            ifstream archivo_de_entrada;
            archivo_de_entrada.open("numeros.txt");
            // lee (read)
            for (int count = 0; count < SIZE_A; count++)
                archivo_de_entrada >> arreglo[count];
            archivo_de_entrada.close();
            showvalues(0, 3, arreglo);
            showvalues2(5, 8);
            sumtotal(total1, total2);
            system("pause");
            return 0;
        }

        void showvalues(int a, int b, int arreglos[])
        {
            int total1 = 0;
            // muestra (display)
            cout << "los num son ";
            for (int count = a; count <= b; count++)
                total1 += arreglos[count];
            cout << total1;
        }

        void showvalues2(int c, int d)
        {
            int total2 = 0;
            cout << "los num 2 son ";
            for (count = 5; count <= 8; count++)
                total2 = total2 + arreglo[count];
            cout << total2;
        }

        void sumtotal(int e, int f)
        {
            cout << "la suma de t1 y t2 es ";
            total = total1 + total2;
            cout << total;
        }

    Read the article

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi... I am sending an Ajax request using jQuery. What happens is that I am getting a "405 Method Not Allowed" error. I am just posting a form, which would take the details from the form and insert them into the DB; just the usual stuff. I am using WEBrick, which comes as default with the Rails package. Can somebody please tell me how to fix this? This is the code that triggers the Ajax request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control    no-cache
        Allow            GET, PUT, DELETE
        Content-Type     text/html; charset=utf-8
        Content-Length   9502
        Server           WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date             Wed, 02 Jun 2010 20:41:33 GMT
        Connection       Keep-Alive

    Request headers:

        Host             localhost:3000
        User-Agent       Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept           application/json, text/javascript, */*
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       115
        Connection       keep-alive
        Content-Type     application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With XMLHttpRequest
        Referer          http://localhost:3000/viewspot/3
        Content-Length   141
        Pragma           no-cache
        Cache-Control    no-cache

    Read the article

  • How to load MySQL data on separate forms

    - by user225269
    I used this code to add the data to the MySQL table, but I do not know how to separate the values again because they are only stored in one column:

        $birthday = mysql_real_escape_string($_POST['yyyy'] . '-' . $_POST['mm'] . '-' . $_POST['dd']);

    How do I load them back into this form, so that the month, day, and year are separated?

        <tr>
          <td><font size="2">Birthday</td>
          <td>
            <select title="- Select Month -" name="mm" id="mm" class="" >
              <option value="" >--Month--</option>
              <option value="1" >Jan</option>
              <option value="2" >Feb</option>
              <option value="3" >Mar</option>
              <option value="4" >Apr</option>
              <option value="5" >May</option>
              <option value="6" >Jun</option>
              <option value="7" >Jul</option>
              <option value="8" >Aug</option>
              <option value="9" >Sep</option>
              <option value="10" >Oct</option>
              <option value="11" >Nov</option>
              <option value="12" >Dec</option>
            </select>
            <input title="Day" type="text" onkeypress="return isNumberKey(event)" name="dd" value="" size="1" maxlength="2" id='numbers'/ >
            <input title="Year" type="text" onkeypress="return isNumberKey(event)" name="yyyy" value="" size="1" maxlength="4" id='numbers'/>
          </td>
        </tr>

    Please help.
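    A hedged sketch of the query side (the table name users is made up here; the column name birthday matches the insert code above): MySQL can split a DATE column back into the three values the form needs with YEAR(), MONTH() and DAYOFMONTH():

        -- split the stored date back into form fields (users/id are assumed names)
        SELECT YEAR(birthday)       AS yyyy,
               MONTH(birthday)      AS mm,
               DAYOFMONTH(birthday) AS dd
        FROM   users
        WHERE  id = 1;

    Each value can then be echoed into the matching field: mm to pre-select the right <option>, and dd and yyyy into the value attributes of the two text inputs.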

    Read the article

  • Automatically restarting Erlang applications

    - by Nick
    I recently ran into a bug where an entire Erlang application died, yielding a log message that looked like this:

        =INFO REPORT==== 11-Jun-2010::11:07:25 ===
            application: myapp
            exited: shutdown
            type: temporary

    I have no idea what triggered this shutdown, but the real problem I have is that it didn't restart itself. Instead, the now-empty Erlang VM just sat there doing nothing. Now, from the research I've done, it looks like there are other "start types" you can give an application: 'transient' and 'permanent'. If I start a supervisor within an application, I can tell it to make a particular process transient or permanent, and it will automatically restart it for me. However, according to the documentation, if I make an application transient or permanent, it doesn't restart it when it dies, but rather it kills all the other applications as well. What I really want to do is somehow tell the Erlang VM that a particular application should always be running, and if it goes down, restart it. Is this possible to do? (I'm not talking about implementing a supervisor on top of my application, because then it's a catch-22: what if my supervisor process crashes? I'm looking for some sort of API or setting that I can use to have Erlang monitor and restart my application for me.) Thanks!

    Read the article

  • How to get the Amazon S3 PHP SDK working?

    - by JakeRow123
    I'm trying to set up S3 for the first time and trying to run the sample file that comes with the PHP SDK that creates a bucket and attempts to upload some demo files to it. But this is the error I am getting:

        The difference between the request time and the current time is too large.

    I read in another question on SO that this is because Amazon determines a valid request by comparing the times between the server and the client: the two must be within a 15-minute span of one another. Now here is the problem. My laptop's time is 12:30 AM, June 8, 2012 at the moment. On my server I created a file called servertime.php and placed this code in that file:

        <?php print strftime('%c'); ?>

    and the output is:

        Fri Jun 8 00:31:22 2012

    It looks like the day is correct, but I don't know what to make of 00:31:22. In any case, how is it possible to always make sure the time between the client and server is within a 15-minute window of one another? What if I have a user in China who wishes to upload a file on my site, which uses S3 for the CDN? Then the time difference would be over a day. How can I make sure all my users' times are within 15 minutes of my server time? What if the user is in the U.S. but the time on their machine is misconfigured? Basically, how do I get S3 bucket creation and upload to work?

    Read the article

  • Error in Firefox loading images

    - by Brian
    Hello. Details of the app: ASP.NET project, local web server, hosted in IIS locally, using the latest Firefox, uses forms authentication. I'm getting a logon user name/password box when trying to access my local web server. Using the Net panel in Firebug, I see the issue is with an animated GIF, showing up as 401 Unauthorized. I checked the details, and this is what I see for the URL http://localhost/<virtual>/Images/loading.gif

    Response headers:

        Server            Microsoft-IIS/5.1
        Date              Thu, 17 Jun 2010 19:02:58 GMT
        WWW-Authenticate  Negotiate NTLM
        Connection        close
        Content-Length    4046
        Content-Type      text/html

    Request headers:

        Host             localhost
        User-Agent       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729; .NET4.0E)
        Accept           image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language  en-us,en;q=0.5
        Accept-Encoding  gzip,deflate
        Accept-Charset   ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive       115
        Connection       keep-alive
        Referer          <correct referer>
        Cookie           ASP.NET_SessionId=ux4bt345qjz4p3u1wm1zgmez; .ASPXFormsAuth=33F12B444040827B8ABF154EE4EDE43B6CA532432EB846987B355097E00256DF0955C76A37BC593EAA961747BF1CC1D8949FF63C6F2CA69D77213EB15B4EDFAF57A83D9E1F88AB8D821C3A09C07EA2EE; .ASPROLES=qTFrGteJydYAE3118WGXbhJthTDdjdtuQ06t4bYVrM1BwIfcEHU1HhnEcs7TqSOaV-fIN5MH3uO57oNVWXDvrhkZ8gQuURuUk_K0TpoR-DEFXuF953Gl9aIilKAdV211jutMNQmhkt2rdPE2tEhHs3pz953fADxjAOyZl7K-AqNvMk3yqJshhKHhJIf-ALMhWIYlrrKy0WsYznUwh3WCtPfzEBD5XzmXU8HVMJ2-ArLjBISuegvSmxvK1PuXBPhoMRMi9Ynaw6xi9ypGk-R6uN0ljOMCGkB2-20WUlFuP0xWTfac_zCTDT00pbpnyjtygnM-LShOXTrZ_mhoRuXfKYEYSodNihwD6SRr19Nm-8uZ5BQ-W81svM17S2C0vc0FaxtiuAcN_vHcsN1OEJeCuVfRjeqzo9xWEViupP3Vh6aOcCm6yrftgw5x94piuCJO7tCfXjJAw5RVUWDBBWv5gmid171F0k-_XZ0CSv7Gm2Eai1BRfogAqQ_MV3tyPv7XVEyJXRXqYGlf1JpkfTW8S8On4E05v9gx9RcdnKHZebiOZwbP1_ho9nG7pMwXysbhjxtxwZ-zLx-v11_rhZw_i5m7iNcLtt4BbFU-sb_crzMpCKGywHIc452Zp1E0kx1Rfx-2eUnaiLiCfGed-QqelO88NYTpJHttGKEfhFrDgmaIXZPJRtuZ-GrS6t3Vla-8qDAVb1p6ovPwoVT4z4BhQyFsk542gDx-uQDw6D0B6zo7lXfcOjtolUxDcLbETsNlYsexZaxFpRSbw7M1ldwL_k92P9wLPlv9mw4NtyhXKJesMu7GjquZuoBN3hO00AqJEe1tKFFtfrvbE5ZH7uNu7myNdtlxRPe3WZe7qukbqHo1
        Pragma           no-cache
        Cache-Control    no-cache

    Any ideas? Thanks.

    Read the article

  • Rails 2.x provide meaningful error message with http basic authentication

    - by randombits
    I'm using basic HTTP authentication in my Rails app via the following code:

        class ApplicationController < ActionController::Base
          helper :all # include all helpers, all the time
          before_filter :authenticate

          private

          def authenticate
            authenticate_or_request_with_http_basic do |username, password|
              if username.nil? || password.nil?
                render :inline => %(xml.instruct! :xml, :version => "1.0", :encoding => "UTF-8"
                  xml.errors do
                    xml.error('Could not authenticate you.')
                  end), :type => :builder, :status => 401
              end
            end
          end
        end

    The problem is, if you do a curl http://127.0.0.1:3000/foo/1.xml without providing the -u username:password flag, you get a deadbeat response like this:

        HTTP/1.1 401 Unauthorized
        Cache-Control: no-cache
        WWW-Authenticate: Basic realm="Foo"
        X-Runtime: 1
        Content-Type: text/html; charset=utf-8
        Content-Length: 27
        Server: WEBrick/1.3.1 (Ruby/1.9.1/2010-01-10)
        Date: Thu, 03 Jun 2010 03:09:18 GMT
        Connection: Keep-Alive

        HTTP Basic: Access denied.

    Is it possible at all to render the inline XML I have above in the event a username and password is not provided by the user?

    Read the article

  • Parsing large txt files in Ruby taking a lot of time?

    - by hershey92
    Below is the code to download a txt file from the internet (approx 9000 lines) and populate the database. I have tried a lot, but it takes a long time, more than 7 minutes. I am using Windows 7 64-bit and Ruby 1.9.3. Is there a way to do it faster?

        require 'open-uri'
        require 'dbi'

        dbh = DBI.connect("DBI:Mysql:mfmodel:localhost", "root", "")
        #file = open('http://www.amfiindia.com/spages/NAV0.txt')
        file = File.open('test.txt', 'r')
        lines = file.lines
        2.times { lines.next }
        curSubType = ''
        curType = ''
        curCompName = ''
        lines.each do |line|
          line.strip!
          if line[-1] == ')'
            curType, curSubType = line.split('(')
            curSubType.chop!
          elsif line[-4..-1] == 'Fund'
            curCompName = line.split(" Mutual Fund")[0]
          elsif line == ''
            next
          else
            sCode, isin_div, isin_re, sName, nav, rePrice, salePrice, date = line.split(';')
            sCode = Integer(sCode)
            sth = dbh.prepare "call mfmodel.populate(?,?,?,?,?,?,?)"
            sth.execute curCompName, curSubType, curType, sCode, isin_div, isin_re, sName
          end
        end
        dbh.do "commit"
        dbh.disconnect
        file.close

    This is the format of the data to be inserted into the table:

        106799;-;-;HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION;10.352;10.3;10.352;29-Jun-2012

    Now there are 8000 such lines, so how can I do an insert by combining all of them and calling the procedure just once? Also, does MySQL support arrays and iteration to do such a thing inside the routine? Please give your suggestions. Thanks.
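    On the batching question, a hedged sketch (the table name mf_scheme and the sample category values are invented here; the real table sits behind the mfmodel.populate procedure and isn't shown): MySQL stored routines have no native array type, but a single multi-row INSERT carries many parsed lines in one round trip, which is usually far faster than 8000 individual procedure calls:

        -- one statement per batch of parsed lines instead of one call per line
        INSERT INTO mf_scheme
            (comp_name, sub_type, type, s_code, isin_div, isin_re, s_name)
        VALUES
            ('HDFC', 'Arbitrage', 'Open Ended', 106799, '-', '-', 'HDFC ARBITRAGE FUND RETAIL PLAN DIVIDEND OPTION'),
            ('HDFC', 'Arbitrage', 'Open Ended', 106800, '-', '-', '...next parsed row...');

    The Ruby side would accumulate the escaped VALUES tuples in an array, fire one statement per few hundred rows, and commit once at the end, as the code above already does.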

    Read the article

  • How can I make "month" columns in SQL?

    - by Beska
    I've got a set of data that looks something like this (VERY simplified):

        productId  Qty  dateOrdered
        ---------  ---  -----------
        1          2    10/10/2008
        1          1    11/10/2008
        1          2    10/10/2009
        2          3    10/12/2009
        1          1    10/15/2009
        2          2    11/15/2009

    Out of this, we're trying to create a query to get something like:

        productId  Year  Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec
        ---------  ----  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---  ---
        1          2008  0    0    0    0    0    0    0    0    0    2    1    0
        1          2009  0    0    0    0    0    0    0    0    0    3    0    0
        2          2009  0    0    0    0    0    0    0    0    0    3    2    0

    The way I'm doing this now, I'm doing 12 selects, one for each month, and putting those in temp tables. I then do a giant join. Everything works, but this guy is dog slow. I know this isn't much to go on, but knowing that I barely qualify as a tyro in the db world, I'm wondering if there is a better high-level approach to this that I might try. (I'm guessing there is.) (I'm using MS SQL Server, so answers that are specific to that DB are fine.) (I'm just starting to look at PIVOT as a possible help, but I don't know anything about it yet, so if someone wants to comment about that, that might be helpful as well.)
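    Since PIVOT came up, a hedged sketch against the simplified table (untested; the table name Orders is assumed): PIVOT collapses the 12 selects and the giant join into a single pass over the data:

        SELECT productId, [Year],
               ISNULL([1], 0)  AS Jan, ISNULL([2], 0)  AS Feb, ISNULL([3], 0)  AS Mar,
               ISNULL([4], 0)  AS Apr, ISNULL([5], 0)  AS May, ISNULL([6], 0)  AS Jun,
               ISNULL([7], 0)  AS Jul, ISNULL([8], 0)  AS Aug, ISNULL([9], 0)  AS Sep,
               ISNULL([10], 0) AS Oct, ISNULL([11], 0) AS Nov, ISNULL([12], 0) AS [Dec]
        FROM (SELECT productId,
                     YEAR(dateOrdered)  AS [Year],
                     MONTH(dateOrdered) AS monthNo,
                     Qty
              FROM Orders) AS src
        PIVOT (SUM(Qty) FOR monthNo IN ([1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12])) AS p;

    The ISNULL wrappers turn the months with no sales into the zeroes shown in the desired output.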

    Read the article

  • How to add "missing" columns in a column group in reporting services?

    - by Gimly
    Hello, I'm trying to create a report that displays, for each month of the year, the quantity of goods sold. I have a query that returns the list of goods sold for each month; it looks something like this:

        SELECT Seller.FirstName, Seller.LastName, SellingHistory.Month, SUM(SellingHistory.QuantitySold)
        FROM SellingHistory
        JOIN Seller ON SellingHistory.SellerId = Seller.SellerId
        WHERE SellingHistory.Year = @Year
        GROUP BY Seller.FirstName, Seller.LastName, SellingHistory.Month

    What I want to do is display a report that has a column for each month plus a total column that will display, for each seller, the quantity sold in the selected month:

        Seller Name | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Total

    What I managed to do, using a matrix and a column group (grouped on Month), is display the columns for existing data: if I have data from January to March, it displays the first 3 columns and the total. What I would like is to always display all the columns. I thought about doing that by adding the missing months in the SQL query, but I find that a bit weird, and I'm sure there must be some "cleaner" solution, as this must be quite a frequent need. Thanks. PS: I'm using SQL Server Express 2008.
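    For what it's worth, a hedged sketch of the "add the missing months in SQL" option (untested; table and column names are from the query above, and Month is assumed to be stored as a number): a fixed 12-row month list LEFT JOINed to the sales data guarantees the column group always sees all 12 months, with zero quantities where nothing was sold:

        WITH Months(MonthNo) AS (
            SELECT 1
            UNION ALL
            SELECT MonthNo + 1 FROM Months WHERE MonthNo < 12
        )
        SELECT s.FirstName, s.LastName, m.MonthNo,
               ISNULL(SUM(sh.QuantitySold), 0) AS QuantitySold
        FROM Months AS m
        CROSS JOIN Seller AS s
        LEFT JOIN SellingHistory AS sh
               ON sh.SellerId = s.SellerId
              AND sh.Month = m.MonthNo
              AND sh.Year = @Year
        GROUP BY s.FirstName, s.LastName, m.MonthNo;

    The other common route is to leave the query alone and make the report's column group iterate over a fixed 1-12 list instead of the data.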

    Read the article

  • Time.new does not work as I would expect

    - by Marius Pop
    I am trying to generate some seed material:

        seed_array.each do |seed|
          Task.create(date: Date.new(2012, 06, seed[1]),
                      start_t: Time.new(2012, 6, 2, seed[2], seed[3]),
                      end_t: Time.new(2012, 6, 2, seed[2] + 2, seed[3]),
                      title: "#{seed[0]}")
        end

    Ultimately I will put in random hours, minutes and seconds. The problem that I am facing is that instead of creating a time with the 2012-06-02 date, it creates a time with a different date: 2000-01-01. I tested Time.new(2012,6,2,2,20,45) in the Rails console and it works as expected. When I am trying to seed my database, however, some voodoo magic happens and I don't get the date I want. Any inputs are appreciated. Thank you!

    Update 1: this is a small sample of the log:

        (0.0ms) begin transaction
        SQL (0.5ms) INSERT INTO "tasks" ("created_at", "date", "description", "end_t", "group_id", "start_t", "title", "updated_at") VALUES (?, ?, ?, ?, ?, ?, ?, ?) [["created_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00], ["date", Thu, 07 Jun 2012], ["description", nil], ["end_t", 2012-06-02 10:02:00 -0400], ["group_id", nil], ["start_t", 2012-06-02 08:02:00 -0400], ["title", "99"], ["updated_at", Tue, 03 Jul 2012 02:15:34 UTC +00:00]]
        (2.3ms) commit transaction

    Update 2: this is what shows up in the Rails console:

        Task id: 101, date: "2012-06-26", start_t: "2000-01-01 08:45:00", end_t: "2000-01-01 10:45:00", title: "1", description: nil, group_id: nil, created_at: "2012-07-03 02:15:33", updated_at: "2012-07-03 02:15:33"

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them.

    When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets:

    - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
    - SAMPLED will have 21 columns full of data.
    - LIMITED will have 12 columns of data and NULLs in the remainder.
    - DETAILED will have 21 columns full of data.

    So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all, and then progress to the extra information later. We know that the first parameter that we supply has to be a database ID, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    gives us

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;

        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will simply be noticed if the results are not consistent with the expected results; in the case of this blog, this is the method I have used.

    So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    - avg_fragmentation_in_percent. The amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count. The number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count. Total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and performance starts to drop.

    This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an "intelligent" process that assesses key characteristics of an index and decides on the best defragmentation method to apply, if any. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the DMV with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between these columns: column 1 (page_count) is the number of 8KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8KB has actual data written on it), column 3 (record_count) is how many records are recorded in the index, and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to:

        (page_count * 8 * 1024 * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important, as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always added to the end.

    * - it's approximate, as there are many factors associated with things like the type of data and other database settings that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
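    As a rough illustration of the kind of "intelligent" process described above, here is a minimal sketch only, loosely in the spirit of the Books Online example D mentioned in the text; the 5%/30% thresholds and the page_count filter are commonly quoted rules of thumb assumed here, not figures from this article:

        -- suggest a maintenance action per index based on its logical fragmentation
        SELECT OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , [ddips].[avg_fragmentation_in_percent]
             , CASE
                   WHEN [ddips].[avg_fragmentation_in_percent] > 30 THEN 'ALTER INDEX ... REBUILD'
                   WHEN [ddips].[avg_fragmentation_in_percent] > 5 THEN 'ALTER INDEX ... REORGANIZE'
                   ELSE 'no action'
               END AS [SuggestedAction]
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]
        WHERE [ddips].[page_count] > 100; -- assumption: tiny indexes rarely benefit from defragmentation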

    Read the article

  • Ubuntu 12.04 lightdm dumps to tty. Cannot start GUI interface when booting off the hard drive, but can when booting off USB

    - by user72681
    When booting, lightdm dumps to tty. No GUI interface works; this is after a fresh install of Ubuntu 12.04, where the GUI interface works when running off the USB. I have an NVIDIA Corporation G98 [Quadro NVS 420] graphics card. After I call startx from the terminal, it still doesn't work. I get the following in Xorg.0.log:

        [   327.718] (--) NVIDIA(0): Memory: 262144 kBytes
        [   327.718] (--) NVIDIA(0): VideoBIOS: 62.98.6f.00.07
        [   327.718] (II) NVIDIA(0): Detected PCI Express Link width: 16X
        [   327.718] (--) NVIDIA(0): Interlaced video modes are supported on this GPU
        [   327.756] (--) NVIDIA(0): Connected display device(s) on Quadro NVS 420 at PCI:3:0:0
        [   327.756] (--) NVIDIA(0): none
        [   327.756] (EE) NVIDIA(0): No display devices found for this X screen.
        [   328.010] (II) UnloadModule: "nvidia"
        [   328.010] (II) Unloading nvidia
        [   328.010] (II) UnloadModule: "wfb"
        [   328.010] (II) Unloading wfb
        [   328.010] (II) UnloadModule: "fb"
        [   328.010] (II) Unloading fb
        [   328.011] (EE) Screen(s) found, but none have a usable configuration.
        [   328.011] Fatal server error:
        [   328.011] no screens found

    /var/log/lightdm/lightdm.log:

        [+0.00s] DEBUG: Starting local X display
        [+0.00s] DEBUG: X server :0 will replace Plymouth
        [+0.02s] DEBUG: Using VT 7
        [+0.02s] DEBUG: Activating VT 7
        [+0.02s] DEBUG: Logging to /var/log/lightdm/x-0.log
        [+0.02s] DEBUG: Writing X server authority to /var/run/lightdm/root/:0
        [+0.02s] DEBUG: Launching X Server
        [+0.02s] DEBUG: Launching process 1074: /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none
        [+0.02s] DEBUG: Waiting for ready signal from X server :0
        [+0.02s] DEBUG: Acquired bus name org.freedesktop.DisplayManager
        [+0.02s] DEBUG: Registering seat with bus path /org/freedesktop/DisplayManager/Seat0
        [+1.38s] DEBUG: Process 1074 exited with return value 1
        [+1.38s] DEBUG: X server stopped
        [+1.38s] DEBUG: Removing X server authority /var/run/lightdm/root/:0
        [+1.38s] DEBUG: Releasing VT 7
        [+1.38s] DEBUG: Stopping Plymouth, X server failed to start
        [+1.39s] DEBUG: Display server stopped
        [+1.39s] DEBUG: Stopping display
        [+1.39s] DEBUG: Display stopped
        [+1.39s] DEBUG: Stopping X local seat, failed to start a display
        [+1.39s] DEBUG: Stopping seat
        [+1.39s] DEBUG: Seat stopped
        [+1.39s] DEBUG: Required seat has stopped
        [+1.39s] DEBUG: Stopping display manager
        [+1.39s] DEBUG: Display manager stopped
        [+1.39s] DEBUG: Stopping daemon
        [+1.39s] DEBUG: Exiting with return value 1

    /var/log/lightdm/x-0.log:

        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-31-server x86_64 Ubuntu
        Current Operating System: Linux oorn 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-23-generic root=UUID=b25ab072-077d-40f1-95a4-c7fd66acd2f0 ro reboot=pci quiet splash vt.handoff=7
        Build Date: 07 May 2012 11:43:21PM
        xorg-server 2:1.11.4-0ubuntu10.2 (For technical support please see http://www.ubuntu.com/support)
        Current version of pixman: 0.24.4
        Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Wed Jun 27 12:51:45 2012
        (==) Using config file: "/etc/X11/xorg.conf"
        (==) Using system config directory "/usr/share/X11/xorg.conf.d"
        (EE) NVIDIA(0): No display devices found for this X screen.
        (EE) Screen(s) found, but none have a usable configuration.

        Fatal server error:
        no screens found

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.

        ddxSigGiveUp: Closing log
        Server terminated with error (1). Closing log file.

    Read the article

  • Ubuntu 12.04 LTS initramfs-tools dependency issue

    - by Mike
    I know this has been asked several times, but each issue and resolution seems different. I've tried almost everything I could think of, but I can't fix this. I have a VM (VMware, I think) running 12.04.03 LTS which has stuck dependencies. The VM is on a rented host, running a live system, so I don't want to break it (further).

    cat /etc/lsb-release; uname -a:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        Linux support 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    Some more:

        sudo apt-get update
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run 'apt-get -f install' to correct these.
        The following packages have unmet dependencies.
         initramfs-tools : Depends: initramfs-tools-bin (< 0.99ubuntu13.1.1~) but 0.99ubuntu13.3 is installed
        E: Unmet dependencies. Try using -f.

        sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          initramfs-tools
        The following packages will be upgraded:
          initramfs-tools
        1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/50.3 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        dpkg: dependency problems prevent configuration of initramfs-tools:
         initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.1.1~); however:
          Version of initramfs-tools-bin on system is 0.99ubuntu13.3.
        dpkg: error processing initramfs-tools (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        dpkg: dependency problems prevent configuration of apparmor:
         apparmor depends on initramfs-tools; however:
          Package initramfs-tools is not configured yet.
        dpkg: error processing apparmor (--configure):
         dependency problems - leaving unconfigured
        No apport report written because the error message indicates it's a follow-up error from a previous failure.
        Errors were encountered while processing:
         initramfs-tools
         apparmor
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    If I look at the policy behind initramfs-tools / initramfs-tools-bin I get:

        apt-cache policy initramfs-tools
        initramfs-tools:
          Installed: 0.99ubuntu13.1
          Candidate: 0.99ubuntu13.3
          Version table:
             0.99ubuntu13.3 0
                500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
         *** 0.99ubuntu13.1 0
                100 /var/lib/dpkg/status
             0.99ubuntu13 0
                500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

        apt-cache policy initramfs-tools-bin
        initramfs-tools-bin:
          Installed: 0.99ubuntu13.3
          Candidate: 0.99ubuntu13.3
          Version table:
         *** 0.99ubuntu13.3 0
                500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
                100 /var/lib/dpkg/status
             0.99ubuntu13 0
                500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

    So the issue seems to be that I have 0.99ubuntu13.3 for initramfs-tools-bin yet 0.99ubuntu13.1 for initramfs-tools, and can't upgrade to 0.99ubuntu13.3. I've performed apt-get clean/autoclean/install -f/upgrade -f many times, but they won't resolve it. I can think of only two other 'solutions':

    1. Edit the dpkg dependency list to trick it into doing the installation with a broken dependency. This seems very dodgy and would be a last resort.
    2. Downgrade both initramfs-tools and initramfs-tools-bin to 0.99ubuntu13 from the precise/main sources and hope that would get them in step. However, I'm not sure if this will be possible, or whether it would introduce more issues.

    I'm not sure how this situation arose in the first place. /boot was 96% full; it's now 56% full (it's tiny - 64MB ... this is what I got from the hosting company). Can anyone offer advice, please?

    Read the article

  • Oracle ties social, CRM, analytics products to customer experience

    - by Richard Lefebvre
    Oracle will embark on a new product strategy that centers on customer experience management, an approach driven by the company's many recent acquisitions.

    The new approach, announced by the company Monday night, will be seen in an expansive suite that features familiar Oracle products -- such as its Fusion CRM platform -- and offerings the company recently gained through acquisitions, including FatWire, RightNow and Vitrue. Billed as Oracle Customer Experience (CX), the suite enables businesses to respond to a market centered on the customer experience, said Anthony Lye, the company's senior vice president of CRM. Companies "are very aware their products are commoditizing," Lye said in an interview last week, referring to how the Web and social media channels have empowered customers. Customer experiences start and mature outside of CRM, and applications today need to reflect that shift, Lye said. Businesses thus need to step away from a pure CRM model, he said.

    Oracle claims CX will improve customer experience management by connecting businesses with customers across Web sites and social channels. Companies can create a single, real-time view of the customer and use predictive analytics of interactions to strengthen the customer experience, Oracle said. "Companies have to connect with their customers wherever, whenever and however they want," Lye said. "They have to know and understand their customer."

    Lye promoted Oracle CX as a suite that will work across channels to complement the company's applications. A new strategy has been "cooking" for years now, but the acquisitions Oracle has made over the past two years made the time right for a "unique collaboration," Lye said. CX includes basic Oracle CRM solutions such as Siebel and the new Fusion Apps. It also includes the company's MDM products: Enterprise Data Quality, Customer Hub and Product Hub. And the suite is rounded out by the services that Oracle recently bought, transactions that created or enhanced the company's presence in social, marketing, e-commerce and customer service. For instance, FatWire provides tools for marketing, ATG focuses on e-commerce, and RightNow specializes in customer service.

    Two recent acquisitions -- Collective Intellect and Vitrue -- gave Oracle a seat at the social table. Collective Intellect is a social intelligence program, and Vitrue is a social marketing and engagement platform. Those acquisitions have yet to be finalized. Oracle hopes to eventually integrate the two social offerings, as well as most of the other services, into the CX suite.

    CX can integrate on Oracle's standard middleware, and can give users a lower TCO by leveraging it as a single stack, on premise or as a cloud solution. Lye deferred questions about the pricing of CX, and instead pitched Oracle's ability to offer multiple customer experience solutions in one suite. Businesses have struggled with the complexity of infrastructure and modern services that communicate with customers, Lye said. "They've struggled to pull all these things together. We've done that," he said.

    Stephen Powers, a research director at Forrester Research Inc. in Cambridge, Mass., said it's not surprising for Oracle to offer the CX suite and a related customer experience strategy. "They've got CRM, ATG, FatWire. Clearly, it's been the strategy for them," he said. But the challenge for Oracle, and for any other vendor that has gone on an "acquisition spree," is to connect its many products, Powers said. "The portfolio has to be more than the parts. They've got to realize the efficiencies and value of having these pieces to tie them together," he said. "The proof is in the pudding. Adobe has done a nice job in its space with the products they've got. Now, Oracle has got to show it has something."

    Albert McKeon (SearchCRM)
    Published: 25 Jun 2012
    http://searchcrm.techtarget.com/news/2240158644/Oracle-ties-social-CRM-analytics-products-to-customer-experience

    Read the article

  • Need help with chronic slow wifi connections

    - by mgeorge
    I have had chronic slow wifi connections on several networks for a while now, I think since my update to 12.04 when it came out. I have tried many of the tips and tricks already available out there in the forums with no luck (wicd, etc.). I want to see if any of you experts out there might be able to help me, and thanks in advance!

    I use Ubuntu 12.04 on a Lenovo IdeaPad Y650, and most networks I connect to lose the connection frequently or do not give appropriate bandwidth when I am connected. Here are some results of the usual go-to system checks.

    cat /etc/lsb-release; uname -a:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"
        Linux mgeorge-lenovo 3.2.0-48-generic-pae #74-Ubuntu SMP Thu Jun 6 20:05:01 UTC 2013 i686 i686 i386 GNU/Linux

    lspci -nnk | grep -iA2 net:

        04:00.0 Network controller [0280]: Intel Corporation PRO/Wireless 5100 AGN [Shiloh] Network Connection [8086:4237]
            Subsystem: Intel Corporation WiFi Link 5100 AGN [8086:1211]
            Kernel driver in use: iwlwifi
        --
        08:00.0 Ethernet controller [0200]: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe [14e4:1698] (rev 10)
            Subsystem: Lenovo Device [17aa:3878]
            Kernel driver in use: tg3

    lsusb:

        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 002: ID 090c:7371 Silicon Motion, Inc. - Taiwan (formerly Feiya Technology Corp.)

    iwconfig:

        lo        no wireless extensions.

        wlan0     IEEE 802.11abgn  ESSID:"Dyno"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: CC:5D:4E:46:0A:93
                  Bit Rate=150 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=70/70  Signal level=-40 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:948  Invalid misc:728   Missed beacon:0

        eth0      no wireless extensions.

    rfkill list all:

        0: ideapad_wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: ideapad_bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: no
        2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no

    lsmod:

        Module                  Size  Used by
        uvcvideo               67203  0
        videodev               86588  1 uvcvideo
        nouveau               712674  3
        ttm                    65344  1 nouveau
        drm_kms_helper         45466  1 nouveau
        drm                   197641  5 nouveau,ttm,drm_kms_helper
        i2c_algo_bit           13199  1 nouveau
        mxm_wmi                12893  1 nouveau
        wmi                    18744  1 mxm_wmi
        joydev                 17393  0
        arc4                   12473  2
        snd_hda_codec_realtek 174313  1
        snd_hda_codec_hdmi     31775  1
        snd_hda_intel          32719  3
        snd_hda_codec         109562  3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
        snd_hwdep              13276  1 snd_hda_codec
        snd_pcm                80916  3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_seq_midi           13132  0
        snd_rawmidi            25424  1 snd_seq_midi
        snd_seq_midi_event     14475  1 snd_seq_midi
        psmouse                86520  0
        snd_seq                51592  2 snd_seq_midi,snd_seq_midi_event
        serio_raw              13027  0
        snd_timer              28931  2 snd_pcm,snd_seq
        snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
        iwlwifi               366509  0
        mac80211              436493  1 iwlwifi
        snd                    62218  16 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        ideapad_laptop         17890  0
        sparse_keymap          13658  1 ideapad_laptop
        cfg80211              178877  2 iwlwifi,mac80211
        ir_lirc_codec          12739  0
        lirc_dev               18700  1 ir_lirc_codec
        soundcore              14635  1 snd
        snd_page_alloc         14108  2 snd_hda_intel,snd_pcm
        ir_mce_kbd_decoder     12681  0
        ir_sony_decoder        12462  0
        ir_jvc_decoder         12459  0
        ir_rc6_decoder         12459  0
        ir_rc5_decoder         12459  0
        rc_rc6_mce             12454  0
        ir_nec_decoder         12459  0
        video                  19115  1 nouveau
        ene_ir                 18019  0
        rc_core                21263  10 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_rc6_mce,ir_nec_decoder,ene_ir
        bnep                   17830  2
        rfcomm                 38139  0
        parport_pc             32114  0
        bluetooth             158479  10 bnep,rfcomm
        ppdev                  12849  0
        binfmt_misc            17292  1
        mac_hid                13077  0
        lp                     17455  0
        parport                40930  3 parport_pc,ppdev,lp
        tg3                   141414  0

    nm-tool:

        NetworkManager Tool

        State: connected (global)

        - Device: eth0 -----------------------------------------------------------------
          Type:              Wired
          Driver:            tg3
          State:             unavailable
          Default:           no
          HW Address:        00:23:5A:CC:85:BD

          Capabilities:
            Carrier Detect:  yes

          Wired Properties
            Carrier:         off

        - Device: wlan0  [Dyno] --------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            iwlwifi
          State:             connected
          Default:           yes
          HW Address:        00:22:FA:D0:94:CA

          Capabilities:
            Speed:           150 Mb/s

          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

          Wireless Access Points (* = current AP)
            *Dyno:           Infra, CC:5D:4E:46:0A:93, Freq 2412 MHz, Rate 54 Mb/s, Strength 81 WPA2

          IPv4 Settings:
            Address:         10.0.0.43
            Prefix:          24 (255.255.255.0)
            Gateway:         10.0.0.1

            DNS:             10.0.0.1

    Read the article

  • LVM disappeared after disk replacement on RAID 10

    - by user142295
    here my problem: I am running ubuntu 12.04 on a raid10 (4 disks), on top of which I installed an lvm with two volume groups (one for /, one for /home). The layout of the disks are as follows: Disk /dev/sda: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0003f3b6 Device Boot Start End Blocks Id System /dev/sda1 * 63 481949 240943+ 83 Linux /dev/sda2 481950 2910640634 1455079342+ fd Linux raid autodetect /dev/sda3 2910640635 2930272064 9815715 82 Linux swap / Solaris Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00069785 Device Boot Start End Blocks Id System /dev/sdb1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdb2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdc2 2910158685 2930272064 10056690 82 Linux swap / Solaris Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes 255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000f14de Device Boot Start End Blocks Id System /dev/sdd1 63 2910158684 1455079311 fd Linux raid autodetect /dev/sdd2 2910158685 2930272064 10056690 82 Linux swap / Solaris The first disk (/dev/sda) contains the /boot partition on /dev/sda1. I use grub2 to boot the system off this partition. On top of this raid10 I installed two volume groups, one for /, one for /home. This system worked well, I even exchanged two disks during the last two years. It always worked. But not this time. For the first time, /dev/sda broke. I do not know if this is an issue – I know I would have struggled anyways to overcome the problem with /boot installed on that disk and grub2 installed on the mbr of /dev/sda. 
    Anyway, I did what I always do:

    - start Knoppix
    - fire up the RAID:
      sudo mdadm --examine --scan
      which returns
      ARRAY /dev/md127 UUID=0dbf4558:1a943464:132783e8:19cdff95
    - start it up:
      sudo mdadm --assemble /dev/md127
    - fail the failing disk (SMART event):
      sudo mdadm /dev/md127 --fail /dev/sda2
    - remove the failing disk:
      sudo mdadm /dev/md127 --remove /dev/sda2
    - stop the RAID:
      sudo mdadm -S /dev/md127
    - take out the disk
    - replace it with a new one
    - create the same partitions as on the failing one
    - add it to the RAID:
      sudo mdadm --assemble /dev/md127
      sudo mdadm /dev/md127 --add /dev/sda2
    - wait 4 hours

    All looks fine: cat /proc/mdstat returns:

    Personalities : [raid10]
    md127 : active raid10 sda2[0] sdd1[3] sdc1[2] sdb1[1]
          2910158464 blocks 64K chunks 2 near-copies [4/4] [UUUU]

    unused devices: <none>

    and sudo mdadm --detail /dev/md127 returns:

    /dev/md127:
            Version : 0.90
      Creation Time : Wed Jun 10 13:08:46 2009
         Raid Level : raid10
         Array Size : 2910158464 (2775.34 GiB 2980.00 GB)
      Used Dev Size : 1455079232 (1387.67 GiB 1490.00 GB)
       Raid Devices : 4
      Total Devices : 4
    Preferred Minor : 127
        Persistence : Superblock is persistent

        Update Time : Thu Mar 21 16:27:40 2013
              State : clean
     Active Devices : 4
    Working Devices : 4
     Failed Devices : 0
      Spare Devices : 0

             Layout : near=2
         Chunk Size : 64K

               UUID : 0dbf4558:1a943464:132783e8:19cdff95 (local to host Microknoppix)
             Events : 0.4824680

        Number   Major   Minor   RaidDevice State
           0       8        2        0      active sync   /dev/sda2
           1       8       17        1      active sync   /dev/sdb1
           2       8       33        2      active sync   /dev/sdc1
           3       8       49        3      active sync   /dev/sdd1

    However, there is no trace of the volume groups. Rebooting into Knoppix does not help. Restarting the old system (I actually replugged and re-added the failing disk for that – the system begins to start, but then fails to see the / partition – no wonder if the volume group is gone) does not help. sudo vgscan, sudo vgdisplay, sudo lvs, sudo lvdisplay, and sudo vgscan --mknodes all returned "No volume groups found". I am completely at a loss. Can anyone tell me if and how I can recover my data? Thanks in advance!
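    For readers hitting the same wall, the usual recovery path is to check whether the physical volume signature on /dev/md127 survived and, if not, to rebuild it from an LVM metadata backup. A hedged sketch, assuming a metadata backup file for the volume group is still available (for example from /etc/lvm/archive on a rescued copy of the root filesystem); the volume group name and the PV UUID below are placeholders to be taken from that backup file:

        # check whether the PV label on the array is still visible
        sudo pvscan
        sudo pvck /dev/md127

        # if the label is gone, recreate the PV with its old UUID from the
        # metadata backup, then restore the volume group metadata
        sudo pvcreate --uuid "<PV-UUID-from-backup>" \
                      --restorefile /etc/lvm/archive/<vgname>_00000.vg /dev/md127
        sudo vgcfgrestore -f /etc/lvm/archive/<vgname>_00000.vg <vgname>

        # activate the volume group and recreate the device nodes
        sudo vgchange -ay <vgname>
        sudo vgscan --mknodes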

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish, and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simple example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown below:

    var url = 'http://' + document.location.host + '/glassfish-sse/simple';
    eventSource = new EventSource(url);
    eventSource.onmessage = function (event) {
        var theParagraph = document.createElement('p');
        theParagraph.innerHTML = event.data.toString();
        document.body.appendChild(theParagraph);
    }

    This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you would parse JSON or do any other processing if some other data format is received from the URL. The URL to which the EventSource subscribes is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below:

    @ServerSentEvent("/simple")
    public class MySimpleHandler extends ServerSentEventHandler {

        public void sendMessage(String data) {
            try {
                connection.sendMessage(data);
            } catch (IOException ex) {
                . . .
            }
        }
    }

    And then events can be sent to this handler using a stateless session bean as shown:

    @Startup
    @Stateless
    public class SimpleEvent {

        @Inject
        @ServerSentEventContext("/simple")
        ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers;

        @Schedule(hour="*", minute="*", second="*/10")
        public void sendDate() {
            for (MySimpleHandler handler : simpleHandlers.getHandlers()) {
                handler.sendMessage(new Date().toString());
            }
        }
    }

    This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like:

    Accept: text/event-stream
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
    Accept-Encoding: gzip,deflate,sdch
    Accept-Language: en-US,en;q=0.8
    Cache-Control: no-cache
    Connection: keep-alive
    Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b
    Host: localhost:8080
    Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml
    User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5

    And the response headers:

    Content-Type: text/event-stream
    Date: Thu, 14 Jun 2012 21:16:10 GMT
    Server: GlassFish Server Open Source Edition 4.0
    Transfer-Encoding: chunked
    X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6)

    Notice that the MIME type of the messages from the server to the client is text/event-stream, as defined by the specification.
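    The EventSource API also exposes onopen and onerror callbacks and a close() method, which help in observing the connection life cycle; a minimal sketch building on the client snippet above (the log messages are illustrative):

    var url = 'http://' + document.location.host + '/glassfish-sse/simple';
    var eventSource = new EventSource(url);

    // fires once the stream has been established
    eventSource.onopen = function () {
        console.log('SSE connection open');
    };

    // fires when the connection is lost; the browser retries automatically
    eventSource.onerror = function () {
        console.log('SSE connection error, the browser will retry');
    };

    // call close() to stop receiving events explicitly
    // eventSource.close();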
    The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below:

    @Schedule(hour="*", minute="*", second="*/10")
    public void sendTweets() {
        for (MyTwitterHandler handler : twitterHandler.getHandlers()) {
            String result = twitter.search("glassfish", String.class);
            handler.sendMessage(result);
        }
    }

    The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here, and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.
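    The event stream can also be observed outside the browser; a minimal sketch using curl, assuming the sample is deployed locally on port 8080 as above:

    # -N disables output buffering so events print as they arrive
    curl -N -H "Accept: text/event-stream" \
         http://localhost:8080/glassfish-sse/simple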

    Read the article

  • Add/ End Date Responsibility For Oracle FND User

    - by PRajkumar
    API - fnd_user_pkg.addresp

    Example (explicit date format masks added so TO_DATE does not depend on NLS settings):

    -- -------------------------------------------------------------------------
    -- Add/ End Date Responsibility to Oracle FND User
    -- -------------------------------------------------------------------------
    DECLARE
        lc_user_name              VARCHAR2(100) := 'PRAJ_TEST';
        lc_resp_appl_short_name   VARCHAR2(100) := 'FND';
        lc_responsibility_key     VARCHAR2(100) := 'APPLICATION_DEVELOPER';
        lc_security_group_key     VARCHAR2(100) := 'STANDARD';
        ld_resp_start_date        DATE          := TO_DATE('25-JUN-2012', 'DD-MON-YYYY');
        ld_resp_end_date          DATE          := NULL;
    BEGIN
        fnd_user_pkg.addresp
        (   username        => lc_user_name,
            resp_app        => lc_resp_appl_short_name,
            resp_key        => lc_responsibility_key,
            security_group  => lc_security_group_key,
            description     => NULL,
            start_date      => ld_resp_start_date,
            end_date        => ld_resp_end_date
        );
        COMMIT;
    EXCEPTION
        WHEN OTHERS THEN
            ROLLBACK;
            DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
    /
    SHOW ERR;
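    To end-date a responsibility that is already assigned, the same API can be called again with a non-NULL end_date, which updates the existing assignment. A hedged sketch reusing the values from the example above; the end date itself is illustrative:

    -- -------------------------------------------------------------------------
    -- End Date Responsibility for Oracle FND User (illustrative end date)
    -- -------------------------------------------------------------------------
    DECLARE
        ld_resp_end_date   DATE := TO_DATE('31-DEC-2012', 'DD-MON-YYYY');
    BEGIN
        fnd_user_pkg.addresp
        (   username        => 'PRAJ_TEST',
            resp_app        => 'FND',
            resp_key        => 'APPLICATION_DEVELOPER',
            security_group  => 'STANDARD',
            description     => NULL,
            start_date      => TO_DATE('25-JUN-2012', 'DD-MON-YYYY'),
            end_date        => ld_resp_end_date
        );
        COMMIT;
    END;
    /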

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56  | Next Page >