Search Results

Search found 6479 results on 260 pages for 'audio analysis'.


  • Exponential regression : p-value and F significance

    - by Saravanan K
    I am new to statistics. I have a set of independent and dependent data (X, Y) on which I would like to run an exponential regression to obtain its p-value and Significance F (I have already obtained R² and the coefficients through mathematical calculation). What are the natural steps from the (X, Y) data to mathematically calculating those values? I have spent a week studying this on the internet without finding the right answer.

    Often, exponential data y = b·e^(mx) is first converted to linear data, ln y = mx + ln b. A linear regression is then done on the converted data, producing its p-value and so on (a worked sketch of this follows below). If we use a statistical tool such as Excel's Analysis ToolPak (Data Analysis : Regression), it produces a result in which, I believe, the p-value and Significance F represent the converted linear data and not the original exponential data.

    Questions:

    1. What approach/steps does Excel use to get the p-value and Significance F for the converted linear data shown in its statistics output? It is not clear from its help page or website.
    2. Can the p-value and Significance F be mathematically calculated for exponential regression without using a statistical tool?
    3. Can you point me to the right link if this has been answered before?
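
    To make the linearization step concrete, here is a minimal Python sketch (an illustration under assumptions, not Excel's exact procedure; the data points are hypothetical, roughly y = e^x): fit ln y = mx + ln b by ordinary least squares, then note that with a single predictor the overall F statistic equals t² for the slope, so Significance F coincides with the slope's two-sided p-value.

        import numpy as np
        from scipy import stats

        # Hypothetical (X, Y) data, roughly y = e^x (i.e. b = 1, m = 1).
        x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
        y = np.array([2.7, 7.4, 20.1, 54.6, 148.4, 403.4])

        # Linearize y = b*e^(m*x) as ln y = m*x + ln b, then fit by least squares.
        slope, intercept, r_value, p_value, std_err = stats.linregress(x, np.log(y))
        m, b, r_squared = slope, np.exp(intercept), r_value ** 2

        # Overall F test of the linear fit; with one predictor, F = t^2 and
        # Significance F equals the slope's two-sided p-value.
        n, k = len(x), 1
        F = (r_squared / k) / ((1 - r_squared) / (n - k - 1))
        sig_F = stats.f.sf(F, k, n - k - 1)

        print(m, b, r_squared, p_value, F, sig_F)

    Both numbers describe the fit to the transformed (linear) data, which matches the concern above; inference on the original exponential scale would instead need nonlinear least squares (e.g. scipy.optimize.curve_fit) and its parameter standard errors.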

    Read the article

  • What is the difference between cubes and the Unified Dimensional Model (if any)?

    - by ngm
    I'm currently researching SQL Server 2008 as a business intelligence solution, and currently looking at Analysis Services (and I'm pretty new to business intelligence as a whole...). I'm a bit confused by some of the terms in SSAS, particularly the conceptual differences between cubes and MS's Unified Dimensional Model.

    I believe that a cube in SSAS is basically an OLAP cube -- dimensions, measures, something that sits between the underlying data source and a business user. But then that's kind of what I understand UDM to be as well. The docs for SQL Server 2005 seem to suggest as much: "A cube is essentially synonymous with a Unified Dimensional Model (UDM)".

    But then the SQL Server 2008 pages sort of suggest that UDM is a wrapper for both multidimensional data (cubes) and relational data: "Use the Unified Dimensional Model to provide one consolidated business view for relational and multidimensional data that includes business entities, business logic, calculations, and metrics." This blog post suggests similarly: "UDM provides a single dimensional model for all OLAP analysis and relational reporting needs. So you can use either MDX or SQL."

    Is UDM something that sits above cubes? Or are they the same thing? I presume I would develop cubes with the Cube Designer application; what would I develop a UDM with?

    Read the article

  • Scripts to parse and download iTunes Connect and AppStore data

    - by bradhouse
    I'm looking for recommendations of a script or series of scripts that download and parse iTunes Connect sales data and App Store comments, ratings, and rankings data for a defined app. I'm already aware of solutions like:

    - AppViz
    - appsales-mobile
    - iphone-stats
    - Heartbeat.app

    I'm sure I'll find a few more with more searching, but I can't help feeling there must be a really decent set of open source scripts out there to do this, given how many developers are now writing apps for the App Store. I would be interested to hear about any commercial offerings as well (although my personal preference is for open source, so I can at least see what it is doing with my iTunes Connect login credentials).

    To be clear, I'm really looking for something that hits all of the areas mentioned:

    - App Store (per store): comments, ratings, category/store rankings
    - iTunes Connect: the contents of the sales reports

    Analysis/graphs of the data are not necessary (but would be nice to have, I guess). I'm not really looking for something like AppSales Mobile above; I would like the raw data so I can do my own analysis and formatting. So far it looks like AppViz (listed above) is the best out there. Any suggestions on what is good/available, or should I just go roll my own?

    Read the article

  • Is there a disassembler + debugger for java (ala OllyDbg / SoftICE for assembler)?

    - by Ran Biron
    Is there a utility similar to OllyDbg / SoftICE for Java? That is: execute a class (from a jar / with a classpath) and, without source code, show the disassembly of the intermediate code, with the ability to step through / step over / search for references / edit specific intermediate code in memory / apply edits to the file... If not, is it even possible to write something like this (assuming we're willing to live without HotSpot for the duration of the debugging)?

    Edit: I'm not talking about JAD or JD or Cavaj. These are fine decompilers, but I don't want a decompiler for several reasons, most notably that their output is incorrect (at best, sometimes just plain wrong). I'm not looking for a magical "compiled bytes to Java code" - I want to see the actual bytes that are about to be executed. Also, I'd like the ability to change those bytes (just like in an assembly debugger) and, hopefully, write the changed part back to the class file.

    Edit 2: I know javap exists - but it only goes one way (and without any sort of analysis). Example (code taken from the vmspec documentation): from Java code, we use javac to compile this:

        void setIt(int value) {
            i = value;
        }
        int getIt() {
            return i;
        }

    to a Java .class file. Using javap -c I can get this output:

        Method void setIt(int)
           0 aload_0
           1 iload_1
           2 putfield #4
           5 return

        Method int getIt()
           0 aload_0
           1 getfield #4
           4 ireturn

    This is OK for the disassembly part (not really good without analysis - "field #4 is Example.i"), but I can't find the two other "tools":

    1. A debugger that goes over the instructions themselves (with stack, memory dumps, etc.), allowing me to examine the actual code and environment.
    2. A way to reverse the process - edit the disassembled code and recreate the .class file (with the edited code).

    Read the article

  • deadlock because of foreign key?

    - by George2
    Hello everyone,

    I am using SQL Server 2008 Enterprise. I hit a deadlock in the following stored procedure but, through my own fault, did not record the deadlock graph, and now I cannot reproduce the issue. I want to do a postmortem to find the root cause of the deadlock so I can avoid it in the future.

    The deadlock happens on the delete statement. Param1 is a column of table FooTable and a foreign key of another table (it refers to a primary key clustered index column of that other table). There is no index on Param1 itself in FooTable. FooTable has another column which is used as the clustered primary key, but not the Param1 column.

    Here is my guess at why there is a deadlock, and I want people to review whether my analysis is correct:

    1. Since the Param1 column has no index, there will be a table scan, which acquires a table-level lock; because of the foreign key, the delete operation will also need to check the master table (i.e. acquire a lock on the master table).
    2. Some operation on the master table acquires a lock on the master table, but wants to acquire a lock on FooTable.
    3. (1) and (2) form a lock cycle, which makes the deadlock happen.

    Is my analysis correct? Any reproduction scenario?

        create PROCEDURE [dbo].[FooProc]
        (
            @Param1 int
            ,@Param2 int
            ,@Param3 int
        )
        AS

        DELETE FooTable WHERE Param1 = @Param1

        INSERT INTO FooTable
        (
            Param1
            ,Param2
            ,Param3
        )
        VALUES
        (
            @Param1
            ,@Param2
            ,@Param3
        )

        DECLARE @ID bigint
        SET @ID = ISNULL(@@Identity,-1)
        IF @ID > 0
        BEGIN
            SELECT IdentityStr FROM FooTable WHERE ID = @ID
        END

    thanks in advance,
    George

    Read the article

  • Getting started with massive data

    - by Max
    I'm a math guy and occasionally do some statistics/machine learning analysis consulting projects on the side. The data I have access to are usually on the smaller side, at most a couple hundred megabytes (and almost always far less), but I want to learn more about handling and analyzing data on the gigabyte/terabyte scale. What do I need to know, and what are some good resources to learn from?

    1. Hadoop/MapReduce is one obvious start (see the sketch below).
    2. Is there a particular programming language I should pick up? (I primarily work now in Python, Ruby, R, and occasionally Java, but it seems like C and Clojure are often used for large-scale data analysis?)
    3. I'm not really familiar with the whole NoSQL movement, except that it's associated with big data. What's a good place to learn about it, and is there a particular implementation (Cassandra, CouchDB, etc.) I should get familiar with?
    4. Where can I learn about applying machine learning algorithms to huge amounts of data? My math background is mostly on the theory side, definitely not on the numerical or approximation side, and I'm guessing most of the standard ML algorithms don't really scale.

    Any other suggestions on things to learn would be great!
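
    Since Hadoop streaming runs any executable as a mapper or reducer over stdin/stdout, the MapReduce paradigm itself fits in a few lines of Python. A minimal word-count sketch (file name and invocation are illustrative, not from the original post):

        import sys

        def mapper(stream):
            # Emit one "word<TAB>1" pair per token.
            for line in stream:
                for word in line.split():
                    print("%s\t%d" % (word, 1))

        def reducer(stream):
            # The framework sorts mapper output by key, so counts for a
            # given word arrive contiguously.
            current, count = None, 0
            for line in stream:
                word, _, value = line.rstrip("\n").partition("\t")
                if word != current:
                    if current is not None:
                        print("%s\t%d" % (current, count))
                    current, count = word, 0
                count += int(value)
            if current is not None:
                print("%s\t%d" % (current, count))

        if __name__ == "__main__":
            mapper(sys.stdin) if sys.argv[1] == "map" else reducer(sys.stdin)

    The same file can be smoke-tested locally with a pipeline that mimics the framework: cat input.txt | python wordcount.py map | sort | python wordcount.py reduce.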

    Read the article

  • Is this a good job description? What title would you give this position?

    - by Zack Peterson
    Department: Information Technology
    Reports To: Chief Information Officer

    Purpose: Company's ________________ is specifically engaged in the development of World Wide Web applications and distributed network applications. This person is concerned with all facets of the software development process and specializes in software product management. He or she contributes to projects in an application architect role and also performs individual programming tasks.

    Essential Duties & Responsibilities: This person is involved in all aspects of the software development process, such as:

    - Participation in software product definitions, including requirements analysis and specification
    - Development and refinement of simulations or prototypes to confirm requirements
    - Feasibility and cost-benefit analysis, including the choice of architecture and framework
    - Application and database design
    - Implementation (e.g. installation, configuration, customization, integration, data migration)
    - Authoring of documentation needed by users and partners
    - Testing, including defining/supporting acceptance testing and gathering feedback from pre-release testers
    - Participation in software release and post-release activities, including support for product launch evangelism (e.g. developing demonstrations and/or samples) and subsequent product build/release cycles
    - Maintenance

    Qualifications:

    - Bachelor's degree in computer science or software engineering
    - Several years of professional programming experience
    - Proficiency in the general technology of the World Wide Web: Hypertext Transfer Protocol (HTTP), Hypertext Markup Language (HTML), JavaScript, Cascading Style Sheets (CSS)
    - Proficiency in the following principles, practices, and techniques: accessibility, interoperability, usability, security (especially prevention of SQL injection and cross-site scripting (XSS) attacks), object-oriented programming (e.g. encapsulation, inheritance, modularity, polymorphism), relational database design (e.g. normalization, orthogonality), search engine optimization (SEO), Asynchronous JavaScript and XML (AJAX)
    - Proficiency in the following specific technologies utilized by Company: C# or Visual Basic .NET, ADO.NET (including ADO.NET Entity Framework), ASP.NET (including ASP.NET MVC Framework), Windows Presentation Foundation (WPF), Language Integrated Query (LINQ), Extensible Application Markup Language (XAML), jQuery, Transact-SQL (T-SQL), Microsoft Visual Studio, Microsoft Internet Information Services (IIS), Microsoft SQL Server, Adobe Photoshop

    Read the article

  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all:

    This question relates to my uni assignment, so I am very sorry to say I cannot adopt any of the following best practices in a short time -- you know -- the assignment is due tomorrow :(

    link to Best way to alter const in Java on StackOverflow

    Basically the only task (I hope so) left for me is performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        //static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can certainly alter the values manually and then compile the code to give it another go-around, but a thorough "statistical analysis" usually requires over 2000 runs, which typically lasts more than half an hour on my own laptop... So I hope there is a way to alter the values without digging into the source code to change the "const" values there, so that I can distribute the compiled code to other people's PCs and let them run the statistical analysis instead.

    One other reason for automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from the most optimized values. I hope there is an easy way I can quickly adopt, so as to have a good sleep tonight while the computer does everything for me... :)

    Any hints on this sort of stuff? Any suggestion is welcomed and much appreciated.
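
    One low-effort pattern is to read the constants from JVM system properties instead of literals -- e.g. private static final long FT_THRESHOLD = Long.getLong("ft.threshold", 400); -- and then drive repeated runs from a script. A sketch (the property names, value grids, and jar name here are all hypothetical):

        import itertools
        import subprocess

        # Hypothetical parameter grid mirroring two of the constants above.
        grid = {
            "ft.threshold": [350, 400, 450],
            "ft.threshold.margin": [25, 50, 75],
        }

        keys = list(grid)
        for combo in itertools.product(*grid.values()):
            props = ["-D%s=%s" % (k, v) for k, v in zip(keys, combo)]
            # Each run picks its values up via Long.getLong(...) at startup,
            # so no recompilation is needed between runs.
            subprocess.run(["java"] + props + ["-jar", "agent.jar"], check=True)

    This keeps the fields static final (they are still assigned exactly once, at class initialization), while letting each JVM launch choose different values.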

    Read the article

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all different group id values, performing statistical analysis on each group in turn, and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups
        results <- matrix(rep("NA",300), ncol=3, nrow=100)

        for(i in 1:100) {
          tempmat <- subset(data.mat, ID==groupids[i])
          # Run various stats on tempmat (correlations, regressions, etc), checking to
          # make sure this specific group doesn't have NAs in the variables I'm using
          # and assign results to x, y, and z, for example.
          results[i,1] <- x
          results[i,2] <- y
          results[i,3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, it can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loop into something else (something like an apply call with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?

    Read the article

  • Can ScalaCheck/Specs warnings safely be ignored when using SBT with ScalaTest?

    - by pdbartlett
    I have a simple FunSuite-based ScalaTest:

        package pdbartlett.hello_sbt

        import org.scalatest.FunSuite

        class SanityTest extends FunSuite {

          test("a simple test") {
            assert(true)
          }

          test("a very slightly more complicated test - purposely fails") {
            assert(42 === (6 * 9))
          }
        }

    which I'm running with the following SBT project config:

        import sbt._

        class HelloSbtProject(info: ProjectInfo) extends DefaultProject(info) {
          // Dummy action, just to show config working OK.
          lazy val solveQ = task { println("42"); None }

          // Managed dependencies
          val scalatest = "org.scalatest" % "scalatest" % "1.0" % "test"
        }

    However, when I run sbt test I get the following warnings:

        ...
        [info] == test-compile ==
        [info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
        [info] Compiling test sources...
        [info] Nothing to compile.
        [warn] Could not load superclass 'org.scalacheck.Properties' : java.lang.ClassNotFoundException: org.scalacheck.Properties
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [warn] Could not load superclass 'org.specs.Specification' : java.lang.ClassNotFoundException: org.specs.Specification
        [info]   Post-analysis: 3 classes.
        [info] == test-compile ==

    For the moment I'm assuming these are just "noise" (caused by the unified test interface?) and that I can safely ignore them. But it is slightly annoying to some inner OCD part of me (though not so annoying that I'm prepared to add dependencies for the other frameworks).

    Is this a correct assumption, or are there subtle errors in my test/config code? If it is safe to ignore, is there any other way to suppress these warnings, or do people routinely include all three frameworks so they can pick and choose the best approach for different tests?

    TIA, Paul.

    (ADDED: scala v2.7.7 and sbt v0.7.4)

    Read the article

  • Using an element against an entire list in Haskell

    - by Snick
    I have an assignment and am currently stuck on one section of what I'm trying to do. Without going into specific detail, here is the basic layout:

    I'm given a data element, f, that holds four different types inside (each with its own purpose):

        data F = F Float Int, Int

    and a function:

        func :: F -> F -> Q

    which takes two data elements and (by simple calculations) returns a type that is an updated version of one of the types in the first f. I now have an entire list of these elements and need to run the given function using one data element, returning the type's value (not the data element). My first attempt was to use a foldl:

        myfunc :: F -> [F] -> Q
        myfunc y [] = func y y -- func deals with the same data element calls
        myfunc y (x:xs) = foldl func y (x:xs)

    however I keep getting the same error:

        Couldn't match expected type 'F' against inferred type 'Q'.
        In the first argument of 'foldl', namely 'myfunc'
        In the expression: foldl func y (x:xs)

    I apologise for such an abstract description of my problem, but could anyone give me an idea as to what I should do? Should I even use a fold function, or is there recursion I'm not thinking about?

    Read the article

  • SPSS - sum of squares change radically with slight model changes in ANOVA??

    - by Pat
    I have noticed that the sums of squares in my models can change fairly radically with even the slightest adjustment to a model. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables, with only one small change: categorizing one of the variables as either a 2-level or a 3-level variable.

    Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-groups analysis:

        Source    | Type III SS | df | MS      | F      | Sig
        ----------+-------------+----+---------+--------+------
        intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
        X         | 224.61      | 1  | 224.61  | 5.77   | .019
        Y         | 2.60        | 1  | 2.60    | .07    | .80
        X by Y    | 19.25       | 1  | 19.25   | .49    | .49
        Error     | 2570.40     | 66 | 38.95   |        |

    Then, when I use the exact same data but a slightly different model in which variable Y has 3 levels instead of 2, I get the following:

        Source    | Type III SS | df | MS      | F      | Sig
        ----------+-------------+----+---------+--------+------
        intercept | 3603.88     | 1  | 3603.88 | 90.89  | .000
        X         | 171.89      | 1  | 171.89  | 4.34   | .041
        Y         | 19.23       | 2  | 9.62    | .24    | .79
        X by Y    | 17.90       | 2  | 17.90   | .80    | .80
        Error     | 2537.76     | 64 | 39.65   |        |

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D

    Thank you in advance
    Pat
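
    As a quick arithmetic check on the tables themselves (a worked example, not a diagnosis): each F is the effect mean square over the error mean square, and both the effect SS and the error term are recomputed when the design changes, so X's F shifts as well:

        F_X = \frac{MS_X}{MS_{\mathrm{error}}} = \frac{224.61}{38.95} \approx 5.77
        \qquad \text{vs.} \qquad
        F_X = \frac{171.89}{39.65} \approx 4.34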

    Read the article

  • Git add not working with .png files?

    - by D Lawson
    I have a dirty working tree: dirty because I made changes to source files and touched up some images. I was trying to add just the images to the index, so I ran this command:

        git add *.png

    But this doesn't add the files. There were a few new image files that were added, but none of the ones that were modified/pre-existing were added. What gives?

    Edit: Here is some relevant terminal output:

        $ git status
        # On branch master
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_male.png
        #       modified:   src/main/resources/icons/doctor_female.png
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       src/main/resources/icons/arrow_up.png
        #       src/main/resources/icons/bullet_arrow_down.png
        #       src/main/resources/icons/bullet_arrow_up.png
        no changes added to commit (use "git add" and/or "git commit -a")

    Then I executed "git add *.png" (no output after the command). Then:

        $ git status
        # On branch master
        #
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       new file:   src/main/resources/icons/arrow_up.png
        #       new file:   src/main/resources/icons/bullet_arrow_down.png
        #       new file:   src/main/resources/icons/bullet_arrow_up.png
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_female.png
        #       modified:   src/main/resources/icons/doctor_edit_male.png

    Read the article

  • Where should test classes be stored in the project?

    - by limc
    I build all my web projects at work using RAD/Eclipse, and I'm interested to know where you store your tests' *.class files.

    All my web projects have two source folders: "src" for source and "test" for testcases. The generated *.class files for both source folders are currently placed under the WebContent/WEB-INF/classes folder. I want to separate the test *.class files from the src *.class files for two reasons:

    1. There's no point storing them in WebContent/WEB-INF/classes and deploying them in production.
    2. Sonar and some other static code analysis tools don't produce an accurate analysis, because they take account of my crappy yet correct testcase code.

    So, right now, I have the following output folders:

    - "src" source folder compiles to the WebContent/WEB-INF/classes folder.
    - "test" source folder compiles to the target/test-classes folder.

    Now, I'm getting this warning from RAD:

        Broken single-root rule: A project may not contain more than one output folder.

    So it seems that Eclipse-based IDEs prefer one project = one output folder, yet the "build path" dialog provides an option to set up a custom output folder for my additional source folder, and then it barks at me. I know I can just disable this warning myself, but I want to know how you handle this. Thanks.

    Read the article

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered), and around thirty percent of the records have missing values (to make matters worse, they are missing randomly, without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory for choosing the specification of the model, so one of the key tasks is dimension reduction before running the regression.

    While I am aware of several methods for dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as a part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500000 observations with 200 variables.

    I have two questions:

    1. Is there a good statistical reference out there for dimension reduction for categorical data, along with robust imputation? (I think the first issue is imputation and then dimension reduction.)
    2. This is linked to the implementation of the above. I have used R extensively and tend to rely on the transcan and impute functions heavily for continuous variables, and use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R would be of great help.

    Thank you.
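
    Not a substitute for the principled imputation literature the question asks about, but a minimal Python sketch of one common workaround: most-frequent single imputation, one-hot encoding (treating every column as categorical), then a truncated SVD of the sparse indicator matrix, which is roughly the idea behind multiple correspondence analysis. The file name, column names, and component count are hypothetical.

        import pandas as pd
        from sklearn.impute import SimpleImputer
        from sklearn.preprocessing import OneHotEncoder
        from sklearn.decomposition import TruncatedSVD

        df = pd.read_csv("data.csv")          # hypothetical file and column names
        X = df.drop(columns=["y"])            # the ~200 mostly categorical predictors

        # Crude single imputation: fill each column with its most frequent level.
        X_imp = SimpleImputer(strategy="most_frequent").fit_transform(X)

        # One-hot encode the levels into a sparse indicator matrix...
        Z = OneHotEncoder(handle_unknown="ignore").fit_transform(X_imp)

        # ...and reduce it with a truncated SVD, an MCA-style reduction.
        scores = TruncatedSVD(n_components=30, random_state=0).fit_transform(Z)
        # scores (500000 x 30) can now serve as regression inputs.

    Mode imputation will understate uncertainty relative to multiple imputation, so this is best treated as a baseline rather than an endpoint.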

    Read the article

  • What type of data store should I use for my ios app?

    - by mwiederrecht
    I am pretty new to iOS and to using servers, so forgive me. I am building an iOS app for research. I need to monitor things that the user does and then push the data up to a server for analysis (yes, with user and IRB permission). On the client side I need to keep quite a bit of data that won't really change except when pulling an updated version from the server, plus a minimal amount of user-specific data. Most of the data I collect needs to be pushed to a server for analysis and can then be deleted from the client side.

    I am struggling to figure out what kind of data store I need to use, especially since I am not quite sure how the process of pushing to and pulling from the server works yet. Does it make sense to use Core Data? XML? SQLite? I like the Core Data idea, but I am not sure what kind of problems I will run into when I need to send large amounts of data to and from the server. I imagine I might need to send the data in a different form than it is stored in on either end - so what kind of overhead am I likely to run into in the process of converting it? Is there a good format to save stuff in that would work well on both ends AND for sending the data?

    As you can probably tell, I could use some advice. Thanks!

    Read the article

  • Python2.7: How can I speed up this bit of code (loop/lists/tuple optimization)?

    - by user89
    I repeat the following idiom again and again: I read from a large file (sometimes up to 1.2 million records!) and store the output in an SQLite database. Putting stuff into the SQLite DB seems to be fairly fast.

        def readerFunction(recordSize, recordFormat, connection, outputDirectory, outputFile, numberOfObjects):

            insertString = ("insert into NODE_DISP_INFO(node, analysis, timeStep, "
                            "H1_translation, H2_translation, V_translation, "
                            "H1_rotation, H2_rotation, V_rotation) "
                            "values (?, ?, ?, ?, ?, ?, ?, ?, ?)")

            analysisNumber = int(outputFile[-3:])

            outputFileObject = open(os.path.join(outputDirectory, outputFile), "rb")
            outputFileObject, numberOfRecordsInFileObject = \
                determineNumberOfRecordsInFileObjectGivenRecordSize(recordSize, outputFileObject)
            numberOfRecordsPerObject = numberOfRecordsInFileObject // numberOfObjects

            loop1StartTime = time.time()
            for i in range(numberOfRecordsPerObject):
                processedRecords = []
                loop2StartTime = time.time()

                for j in range(numberOfObjects):
                    fout = outputFileObject.read(recordSize)
                    processedRecords.append(tuple(
                        [j + 1, analysisNumber, i] + list(struct.unpack(recordFormat, fout))))

                loop2EndTime = time.time()
                print "Time taken to finish loop2: {}".format(loop2EndTime - loop2StartTime)

                dbInsertStartTime = time.time()
                connection.executemany(insertString, processedRecords)
                dbInsertEndTime = time.time()

            loop1EndTime = time.time()
            print "Time taken to finish loop1: {}".format(loop1EndTime - loop1StartTime)

            outputFileObject.close()
            print "Finished reading output file for analysis {}...".format(analysisNumber)

    When I run the code, it seems that "loop 2" and "inserting into the database" are where most of the execution time is spent. The average "loop 2" time is 0.003s, but it runs up to 50,000 times in some analyses. The time spent putting stuff into the database is about the same: 0.004s. Currently, I am inserting into the database each time loop 2 finishes, so that I don't have to deal with running out of RAM.

    What could I do to speed up "loop 2"?
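
    One way to shave time off loop 2 (a sketch reusing the names from the function above; it assumes the format string really describes one full record, i.e. struct.Struct(recordFormat).size == recordSize): do a single read() per timestep and unpack with a precompiled struct.Struct, which avoids re-parsing the format string and most of the per-record I/O calls.

        import struct

        recordStruct = struct.Struct(recordFormat)   # parse the format string once
        assert recordStruct.size == recordSize       # sanity check on the layout

        for i in range(numberOfRecordsPerObject):
            # One read per timestep instead of numberOfObjects small reads.
            chunk = outputFileObject.read(recordSize * numberOfObjects)
            processedRecords = [
                (j + 1, analysisNumber, i) + recordStruct.unpack_from(chunk, j * recordSize)
                for j in range(numberOfObjects)
            ]
            connection.executemany(insertString, processedRecords)

    Batching several timesteps into one executemany call and committing once at the end can also help, at the cost of holding more rows in memory.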

    Read the article

  • How can I call multiple nutrient information from ESHA Research API? (apid.esha.com)

    - by user1833044
    I want to call the ESHA Research nutrient REST API, but I cannot figure out how to request multiple nutrients in one call. So far I am only able to retrieve the calories, or the protein, or another single type of nutrient information per call, so I was hoping someone had experience retrieving all the nutrient information with one call. Is this possible?

    This is how I retrieve the TWIX item's calories (the return is in JSON format; note the api key is not really xxxx but a key generated by ESHA once you sign up as a developer):

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da

    If I want fat instead, the call is:

        http://api.esha.com/analysis?apikey=xxxx&fo=urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da&n=urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd

    How can I make one call and get back all the nutrients (fat, calories, carbs, vitamins, etc.) for a particular food ID? I have researched this for a while and cannot seem to find the answer. Thanks in advance for your help.
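
    Failing a documented single-call syntax, a workable fallback (a sketch only: it assumes one request per nutrient id, following the URL pattern quoted above; the nutrient-id table is hypothetical apart from the fat id from the question) is to loop over the ids and merge the JSON responses client-side:

        import requests

        API_KEY = "xxxx"  # your ESHA developer key
        FOOD_ID = "urn:uuid:81d268ac-f1dc-4991-98c1-1b4d3a5006da"  # TWIX, from the question

        # Hypothetical nutrient-id map; only the fat id appears in the question.
        NUTRIENTS = {
            "fat": "urn:uuid:589294dc-3dcc-4b64-be06-c07e7f65c4bd",
            # "calories": "urn:uuid:...", "carbohydrate": "urn:uuid:...", etc.
        }

        results = {}
        for name, nid in NUTRIENTS.items():
            resp = requests.get(
                "http://api.esha.com/analysis",
                params={"apikey": API_KEY, "fo": FOOD_ID, "n": nid},
            )
            resp.raise_for_status()
            results[name] = resp.json()

    Whether the n parameter can instead be repeated or comma-separated in a single request is something only ESHA's documentation or support can confirm.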

    Read the article

  • Video and LAN Cards are not recognized on Intel DG41RQ MB

    - by artyom
    Hello,

    My 32-bit Debian Etch, with the Etch-n-Half kernel 2.6.24 and the Etch-n-Half video driver xserver-xorg-video-intel 2.2.1-1~etchnhalf2, does not recognize my on-board LAN and video; I can work only with a VESA display. It also does not recognize an additional PCI LAN card (which I wanted to use for a crossover cable).

    Output of lspci:

        00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
        00:01.0 PCI bridge: Intel Corporation 4 Series Chipset PCI Express Root Port (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:1b.0 Audio device: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
        00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
        00:1c.1 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 2 (rev 01)
        00:1d.0 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #1 (rev 01)
        00:1d.1 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #2 (rev 01)
        00:1d.2 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #3 (rev 01)
        00:1d.3 USB Controller: Intel Corporation 82801G (ICH7 Family) USB UHCI Controller #4 (rev 01)
        00:1d.7 USB Controller: Intel Corporation 82801G (ICH7 Family) USB2 EHCI Controller (rev 01)
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev e1)
        00:1f.0 ISA bridge: Intel Corporation 82801GB/GR (ICH7 Family) LPC Interface Bridge (rev 01)
        00:1f.1 IDE interface: Intel Corporation 82801G (ICH7 Family) IDE Controller (rev 01)
        00:1f.2 IDE interface: Intel Corporation 82801GB/GR/GH (ICH7 Family) SATA IDE Controller (rev 01)
        00:1f.3 SMBus: Intel Corporation 82801G (ICH7 Family) SMBus Controller (rev 01)
        04:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

    According to the motherboard spec:

    - Video: Intel G41 Express Chipset Graphics and Memory Controller Hub Graphics (GMCH)
    - LAN: Realtek 8111D Gigabit Ethernet Controller

    When I run dpkg-reconfigure xserver-xorg it suggests the i810 driver, but X does not start; it does not start with the intel driver either, only vesa. The on-board LAN I can't even find in lspci... the Realtek entry at the end is the external 100 Mbit card. Where do I start from?

    Read the article

  • KVM recommendations

    - by alex
    I recently tried out a cheaper KVM solution: a dual-monitor one with a USB hub and audio/mic support. It seemed the perfect KVM. However, these are my experiences:

    - When using my keyboard via the PS/2 connection, none of my multimedia keys work (useful, because it has a volume control and my speakers do not, only on a wireless remote control). I plugged the keyboard in via the USB port and it seemed to work. However, to switch the hub from PC to Mac you need to use a keyboard combo, which is only supported when the keyboard is plugged in via PS/2.
    - Sometimes the middle mouse button doesn't work when connected via PS/2.
    - The multi-monitor support is VGA, and I just found out that, by the looks of things, my Mac Mini outputs DVI Digital only (though this is my fault!).
    - The Mac works with two screens, but switching away and back can cause it not to display on the second screen unless I go detect displays again.

    My question is: is there a KVM out there that supports these features?

    - USB keyboard and mouse inputs with full multimedia key support
    - Dual-monitor DVI connections
    - Hotkey to change PCs, plus a physical button
    - USB hub
    - Emulates attached screens when switched to a different computer
    - Works with PC and Mac
    - Audio / microphone support

    Does one exist that won't cost the earth? So far the only one that seems to support all this (that I can tell) is this one.

    UPDATE: I ended up buying the Aten CS1642. It's expensive, but it seems to work great!

    Read the article

  • svchost consuming more than 50% CPU all the time in windows 7

    - by claws
    Hello,

    I'm using Windows 7 Ultimate. An svchost instance containing the DCOM Server Process Launcher, Plug and Play, and Power services is consuming more than 50% of the CPU most of the time.

    I found this blog post: http://blog.hansmelis.be/2007/06/17/windows-vista-long-delay-when-switching-songs-in-media-player/

        That process is associated with two services: DCOM Server Process Launcher and Plug and Play. For the Vulcans among us, all logic stops there for a second. What do those two services have to do with WMP? The answer is provided by Vista's new audio engine. The new engine supports several audio "enhancements". But for the enhancements to work, the engine needs to determine if your hardware is up to the task. And when does it check that? Each time a sound output device is accessed. That's pretty nice if you can do a hot swap of sound hardware, but I don't see me doing that anytime soon. Anyways, it does provide us with the link to the correct service because checking hardware is done by the "Plug and Play" service. One might think that deactivating each enhancement would solve the problem, but that's wishful thinking. The configuration of the enhancements is located in the properties of the sound hardware. When opening the tab, I found out that no enhancements were active. Hmmm... so why does it check the hardware? Well, it does that in case you actually enable an enhancement. To completely stop the hardware checking, you have to tick the box labelled Disable all enhancements. As soon as you do that, Vista finally understands you don't want to use them.

    But that's for Vista. Is it the same with Windows 7? I couldn't find any "Disable all enhancements" option in my Control Panel's Sound applet (mmsys.cpl). Where can I find this option in Windows 7? How do I solve this?

    Read the article

  • Why can’t two programs access my webcam simultaneously?

    - by qdii
    I first launch cheese and my webcam turns on. I then run vlc to grab the output of /dev/video0, but it fails with:

        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea40012e8] v4l2 demux error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3ea4002168] v4l2 access error: cannot set input 0: Device or resource busy
        [0x7f3eb4000b78] main input error: open of `v4l2:///dev/video0' failed

    Whatever pair of video programs I run (skype, cheese, vlc, etc.), the result is always the same: the second program can no longer use the webcam once the first one has grabbed the output. However, I find it curious, as video4linux states:

        In general, V4L2 devices can be opened more than once. When this is supported by the driver, users can for example start a "panel" application to change controls like brightness or audio volume, while another application captures video and audio.

    My webcam shows up in lsusb as 058f:a014 Alcor Micro Corp. Asus Integrated Webcam, but I don't even know what the underlying driver is, so I can't check whether my problem is driver-related or not.

    Any input would be more than welcome!

    Read the article

  • Broken Creative DTT3500 Digital

    - by djechelon
    Hello,

    Many years ago I bought a Creative DTT3500 Digital surround speaker system. For a while now my home theatre has been behaving strangely. Over the past 6 months I noticed that the amplifier needed some time before playing digital audio correctly: until a couple of weeks ago it needed up to 5 minutes of very noisy "warm up" before playing almost-clear audio. Today I found that the noise persists even when using the analog input (with digital inputs completely disabled), and powering the speakers off and on doesn't help.

    Now, it's reasonable that after almost 10 years the amplifier has broken. My problem is that I can't simply replace the speakers with another kit from another brand, since 2 speakers in my room are placed in the back wall, with coaxial cables going under the floor but not completely inside a single tube. I mean, I have coaxial extension cables starting from the DTT3500 amplifier, going into the wall, then under the floor in a plastic cable tube, then exiting the tube (still under the floor), connecting to the male coaxial connector of the original cable supplied by Creative, and continuing up to the point close to my bed where the speakers are. So I could replace the speakers only if the new kit has the same coaxial-to-bi-polar cables that Creative ships with the DTT3500.

    Another option might be repairing the amplifier, but I don't know if that's feasible, what I or a technician should look for in the amplifier, or how much it would cost (given the problem above, is it worth repairing, or should I buy a new kit?).

    What do you suggest I do, based on the above observations? Thank you.

    Read the article

  • Debian amd64 on Dell Studio 540 reboot hangs

    - by Shcheklein
    Hi,

    I have a Dell Studio 540 desktop with Debian Lenny installed on it:

        2.6.26-2-amd64 #1 SMP Tue Mar 9 22:29:32 UTC 2010 x86_64 GNU/Linux

    The problem is that I can't reboot it. It just hangs after the "Will now restart" message. I've already tried the reboot=b, reboot=a, and reboot=h kernel options. Nothing helps.

    Additional info (I can provide any other information):

    dmidecode:

        System Information
            Manufacturer: Dell Inc.
            Product Name: Studio 540

    lspci:

        00:00.0 Host bridge: Intel Corporation 4 Series Chipset DRAM Controller (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
        00:1a.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #4
        00:1a.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #5
        00:1a.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #6
        00:1a.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #2
        00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller
        00:1c.0 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 1
        00:1c.2 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 3
        00:1c.5 PCI bridge: Intel Corporation 82801JI (ICH10 Family) PCI Express Port 6
        00:1d.0 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #1
        00:1d.1 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #2
        00:1d.2 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB UHCI Controller #3
        00:1d.7 USB Controller: Intel Corporation 82801JI (ICH10 Family) USB2 EHCI Controller #1
        00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
        00:1f.0 ISA bridge: Intel Corporation 82801JIR (ICH10R) LPC Interface Controller
        00:1f.2 IDE interface: Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller
        00:1f.3 SMBus: Intel Corporation 82801JI (ICH10 Family) SMBus Controller
        00:1f.5 IDE interface: Intel Corporation 82801JI (ICH10 Family) 2 port SATA IDE Controller
        02:00.0 FireWire (IEEE 1394): JMicron Technologies, Inc. Device 2380
        03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)

    Read the article

  • How to rip AVI from VOB using ffmpeg

    - by Linux Jedi
    I am trying to convert a VOB to an AVI. I have ripped an AVI from this VOB before using ffmpeg, but for some reason it's not working this time. This is what I tried:

        ffmpeg -sameq -acodec copy -i VTS_01_2.VOB output.avi

    This is the output I get:

        FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
          built on Dec 29 2010 18:02:10 with gcc 4.2.1 (Apple Inc. build 5664)
          configuration:
          libavutil     50.15. 1 / 50.15. 1
          libavcodec    52.72. 2 / 52.72. 2
          libavformat   52.64. 2 / 52.64. 2
          libavdevice   52. 2. 0 / 52. 2. 0
          libswscale     0.11. 0 /  0.11. 0
        [mpeg2video @ 0x101014200]mpeg_decode_postinit() failure
            Last message repeated 6 times
        Input #0, mpeg, from 'VTS_01_2.VOB':
          Duration: 26:30:29.20, start: 140.171311, bitrate: 90 kb/s
            Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x480 [PAR 32:27 DAR 16:9], 9800 kb/s, 31.44 fps, 29.97 tbr, 90k tbn, 59.94 tbc
            Stream #0.1[0xa0]: Audio: pcm_s16be, 48000 Hz, 2 channels, s16, 1536 kb/s
        Output #0, avi, to 'output.avi':
          Metadata:
            ISFT            : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 720x480 [PAR 32:27 DAR 16:9], q=2-31, 200 kb/s, 29.97 tbn, 29.97 tbc
            Stream #0.1: Audio: pcm_s16be, 48000 Hz, 2 channels, 1536 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Could not write header for output file #0 (incorrect codec parameters ?)

    Read the article
