Search Results

Search found 1620 results on 65 pages for 'proc'.

Page 40/65 | < Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >

  • How to avoid Remove-Item PowerShell errors "process cannot access the file"?

    - by Michael Freidgeim
    We are using TfsDeployer and a PowerShell script to remove folders using Remove-Item before deploying a new version. Sometimes the PS script fails with the error Remove-Item : Cannot remove item Services\bin: The process cannot access the file 'Services\bin' because it is being used by another process. Get-ChildItem -Path $Destination -Recurse | Remove-Item <<<< -force -recurse + CategoryInfo : WriteError: (C:\Program File..\Services\bin:DirectoryInfo) [Remove-Item], IOException FullyQualifiedErrorId : RemoveFileSystemItemIOError,Microsoft.PowerShell.Commands.RemoveItemCommand I’ve tried to follow the answer from PowerShell remove force and pipe get-childitem -recurse into remove-item: get-childitem * -include *.csv -recurse | remove-item, but the error still happens periodically. We are using Unlocker to manually kill the locking application (it’s usually w3wp), but I would prefer an automated solution. Another (not ideal) option is to suppress the PowerShell errors: get-childitem -recurse -force -erroraction silentlycontinue Any suggestions are welcome.

    Read the article

  • OOM killer kills mysql and apache when there is still a lot of free memory

    - by Flyer
    Let me first say that I'm pretty new to *nix systems and even more so to server management. Anyway, I've got a little problem. I have a VPS with 1 GB of memory running Debian 6. I have a few sites running on it, though only one of them generates any real load. Recently, the OOM killer started killing mysql, causing WordPress and phpBB to report that they can't connect to the MySQL server. The error itself is bad enough, especially when it happens at night and the site stays down until I wake up and restart mysql. I probably have a bad line in my cron which could be the cause of it all (again, I'm new to this): */20 * * * * sync; echo 3 > /proc/sys/vm/drop_caches Well, if you need any information, let me know, since I don't really know which information would be useful here. Also, I'd like to know whether having the above cron task is a bad idea.
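
    For reference, a gentler variant of that cron job would only drop caches when free memory actually runs low; this is a minimal sketch (the 100000 kB threshold is an arbitrary example), though dropping caches at all is rarely needed since the kernel reclaims page cache on demand:

      #!/bin/sh
      # only sync and drop caches when MemFree falls below ~100 MB
      free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
      if [ "$free_kb" -lt 100000 ]; then
          sync
          echo 3 > /proc/sys/vm/drop_caches
      fi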

    Read the article

  • High system cpu load (%sys), system locks

    - by Mark
    For the last two weeks we have been having intermittent severe spikes in system CPU usage (shown as %sys), which last for maybe half a minute, locking most processes, including ssh. I've been trying to figure this out, but atop doesn't show anything relevant (the system usage it shows for processes is insignificant), the spikes are intermittent, and I could not reproduce the spike using any workload for the web application this webserver hosts. If you have any ideas on how to debug high %sys and (sometimes) %si CPU usage, please share them. System specs (don't know if any of this is relevant): dedicated server, CentOS 6, Core i7 950, a consistent 4 to 8 GB of RAM free at any time, hard drives in RAID-1. Additional info: dmesg output doesn't change between spikes, /var/log/messages doesn't change between spikes. Here is cat /proc/vmstat, and here is the output of mpstat 1 during a typical spike. Added 07.11.11: it looks like a simple reboot restored the system state, and we might never know what caused the disturbance in the first place.
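
    One way to catch data during a spike is a crude watcher that snapshots diagnostics whenever %sys crosses a threshold; a sketch only, assuming the sysstat package is installed (the threshold, log path, and the awk column for %sys vary by sysstat version):

      #!/bin/sh
      while true; do
          # field 5 of the "Average:" line is %sys on this sysstat version
          sys=$(mpstat 1 1 | awk '/^Average/ {print int($5)}')
          if [ "$sys" -gt 30 ]; then
              { date; cat /proc/interrupts; ps -eo pid,stat,wchan:30,comm --sort=-pcpu | head -20; } >> /var/log/sys-spike.log
          fi
      done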

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes close to 30 seconds. I use the following command to watch these queries being executed: watch -n1 mysqladmin proc stat We have still not been able to track down the root cause of this problem. I would appreciate any pointers as to what we can check or improve to resolve the issue. Thanks
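
    As a starting point, a small watcher that captures the process list and InnoDB status whenever any non-sleeping query has been running for more than 10 seconds; a sketch only, assuming credentials in ~/.my.cnf and an arbitrary log path:

      #!/bin/sh
      while sleep 5; do
          # field 7 of mysqladmin's pipe-delimited table output is the Time column
          long=$(mysqladmin processlist | awk -F'|' '$7 > 10' | grep -v Sleep)
          if [ -n "$long" ]; then
              { date; echo "$long"; mysql -e 'SHOW ENGINE INNODB STATUS\G'; } >> /var/log/mysql-slow-writes.log
          fi
      done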

    Read the article

  • What command line tools for monitoring host network activity on Linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at the /proc/net 'files'? Is that any better? EDIT: I'm interested in packets Tx/Rx and bytes Tx/Rx, but most importantly drops or errors and why the drop/error might have occurred.
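
    The /proc/net/dev counters are the same ones ifconfig reads, and they do include per-interface errors and drops; a quick sketch to dump them:

      # interface, RX bytes/packets/errs/drops, TX bytes/packets/errs/drops
      awk 'NR > 2 { gsub(/:/, " ")
          printf "%-8s rx: %s b, %s pkts, %s errs, %s drop   tx: %s b, %s pkts, %s errs, %s drop\n",
                 $1, $2, $3, $4, $5, $10, $11, $12, $13 }' /proc/net/dev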

    Read the article

  • Dropping Cached Memory on FreeBSD

    - by user1066698
    I use a FreeNAS server which is built on FreeBSD 8.2-RELEASE-p6. I use the ZFS file system with 13 TB of HDD on a box with 8 GB of physical RAM installed. It uses almost all of the installed RAM while processing requests, but it still holds on to the same amount of memory when idle, which sometimes becomes a problem. On my CentOS web server, I use the following command in a cronjob to drop cached memory: sync; echo 3 > /proc/sys/vm/drop_caches However, this command does not work on my FreeNAS server. How can I drop cached memory on my FreeNAS box, which is built on FreeBSD 8.2? Thank you
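
    For what it's worth, FreeBSD has no drop_caches equivalent; on a ZFS box the memory is almost certainly the ARC, which is released on demand but can be capped. A sketch for stock FreeBSD (the 4 GB figure is just an example; on FreeNAS the same tunable is usually added via the web GUI rather than by editing loader.conf directly):

      # see how much of the "used" memory is really ZFS ARC
      sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max

      # cap the ARC at 4 GB (value in bytes); takes effect after a reboot
      echo 'vfs.zfs.arc_max="4294967296"' >> /boot/loader.conf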

    Read the article

  • PPTP VPN server issue: server = CentOS & client = Windows 7

    - by jmassic
    I have a CentOS server configured as a PPTP VPN server. The client is a Windows 7 machine with "Use default gateway on remote network" enabled in the advanced TCP/IPv4 properties. It can connect to the CentOS box without any problem and can access: the box of its ISP (http://192.168.1.254/), the CentOS server itself, and the website hosted on the server (over http://). But it canNOT access any other web service (google.com or 74.125.230.224). I am a beginner with web servers, so I do not know what could cause this problem. Note 0: the Windows 7 user must be able to access the whole internet through the CentOS PPTP proxy. Note 1: with "Use default gateway on remote network" UNCHECKED, the problem is the same. Note 2: with "Use default gateway on remote network" UNCHECKED AND "disable class based route addition" CHECKED, the Win 7 machine can reach google, but via the ISP IP (no use of the VPN...). See screenshot. Note 3: I have run echo 1 > /proc/sys/net/ipv4/ip_forward and iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
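
    For comparison, a minimal server-side checklist for routing PPTP clients out to the internet, assuming eth0 is the public interface; a common gotcha is a FORWARD chain that drops the clients' traffic even though the NAT rule is in place:

      # make forwarding persistent, not just until the next reboot
      echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
      sysctl -p

      # NAT the ppp clients out through eth0, and check nothing in FORWARD drops them
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
      iptables -L FORWARD -n -v
      service iptables save    # CentOS: persists rules to /etc/sysconfig/iptables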

    Read the article

  • Verifying that a somaxconn change can make a difference

    - by petermolnar
    I got into an argument about the net.core.somaxconn parameter: I was told that it will not make any difference if we change the default of 128. I believed this might be enough proof: "If the backlog argument is greater than the value in /proc/sys/net/core/somaxconn, then it is silently truncated to that value" http://linux.die.net/man/2/listen but apparently it's not. So does anyone know a method to demonstrate this with two machines, one running, for example, MySQL or LVS, and the other hammering it over a GBit network? I'm open to any solution; scripts are slightly more welcome.
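
    One observable signal: when the (clamped) backlog overflows under load, the kernel counts it. A sketch of what to watch on the server while the second machine hammers it:

      # listen-queue overflows show up in the TCP stats
      netstat -s | egrep -i 'overflowed|SYNs to LISTEN'

      # for LISTEN sockets, the Send-Q column of ss is the effective backlog,
      # i.e. the listen() argument after clamping to net.core.somaxconn
      ss -lnt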

    Read the article

  • Install correct libraries depending on 64/32 bit

    - by Rich
    I am using Bash to install a customised version of JBoss, and one of the things I would like to do is install the correct version of the Apache Portable Runtime (APR), which is a native binary. This script could be run on both 32 and 64 bit versions of RHEL. What are my options for identifying which version of the APR to install? I think we only have 32-bit and x64-based systems here, but I would still like to identify ia64 systems so that the script can refuse to install on that type of machine. I am aware of using uname -m and grepping /proc/cpuinfo to find out, but was wondering which approach others would recommend?
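
    A minimal sketch of the uname -m approach, refusing Itanium outright (variable names are illustrative):

      case "$(uname -m)" in
          x86_64)   apr_arch=64 ;;
          i[3-6]86) apr_arch=32 ;;
          ia64)     echo "Itanium is not supported" >&2; exit 1 ;;
          *)        echo "unknown architecture: $(uname -m)" >&2; exit 1 ;;
      esac
      echo "installing the ${apr_arch}-bit APR"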

    Read the article

  • Trouble getting OS fingerprinting to work in iptables

    - by user1197457
    Everyone, As I understand it, OSF has been merged into the kernel since 2.6.before-my-kernel-version. Yet when I run something like this: iptables -I INPUT -j ACCEPT -p tcp -m osf --genre Linux --log 0 --ttl 2 I get an error like: iptables: No chain/target/match by that name iptables -L shows no rules because I did an iptables -F at one point. ALSO, the following command: cat /proc/net/ip_tables_matches does not show "osf" in the list. Googling doesn't seem to help. I've also installed iptables-devel in the hope that I'd be able to load the osf module, but sadly I haven't been able to get that to work. CentOS 6.4 minimal. Any guidance?
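
    If the kernel module and userspace loader are available at all (on CentOS 6 the osf pieces typically come from xtables-addons, and the fingerprint loader nfnl_osf ships in the iptables sources but is often not packaged), the rough sequence would be:

      modprobe xt_osf                       # kernel side of the osf match
      nfnl_osf -f pf.os                     # feed OpenBSD's pf.os fingerprints in over netlink
      grep osf /proc/net/ip_tables_matches  # "osf" should appear once the module is loaded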

    Read the article

  • Calling grep inside Java gives incorrect results while calling grep in a shell gives correct results

    - by futureelite7
    I've got a problem where calling grep from inside Java gives incorrect results, as compared to the results from calling grep on the same file in the shell. My grep command (called both in Java and in bash; I escaped the backslash in Java accordingly): /bin/grep -vP --regexp='^[0-9]+\t.*' /usr/local/apache-tomcat-6.0.18/work/Catalina/localhost/saccitic/237482319867147879_1271411421 Java Code: String filepath = "/path/to/file"; String options = "P"; String grepparams = "^[0-9]+\\t.*"; String greppath = "/bin/"; String[] localeArray = new String[] { "LANG=", "LC_COLLATE=C", "LC_CTYPE=UTF-8", "LC_MESSAGES=C", "LC_MONETARY=C", "LC_NUMERIC=C", "LC_TIME=C", "LC_ALL=" }; options = "v"+options; //Assign optional params if (options.contains("P")) { grepparams = "\'"+grepparams+"\'"; //Quote the regex expression if -P flag is used } else { options = "E"+options; //equivalent to calling egrep } proc = sysRuntime.exec(greppath+"/grep -"+options+" --regexp="+grepparams+" "+filepath, localeArray); System.out.println(greppath+"/grep -"+options+" --regexp="+grepparams+" "+filepath); inStream = proc.getInputStream(); The command is supposed to match and discard strings like these: 85295371616 Hi Mr Lee, please be informed that... My input file is this: 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 8~!95371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 852&^*&1616 Hi Mr Lee, please be informed that... 8529537Ax16 Hi Mr Lee, please be informed that... 85====ppq16 Hi Mr Lee, please be informed that... 85291234783 a3283784428349247233834728482984723333 85219299222 The command works when I call it from bash (results below): 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 8~!95371616 Hi Mr Lee, please be informed that... 852&^*&1616 Hi Mr Lee, please be informed that... 8529537Ax16 Hi Mr Lee, please be informed that... 85====ppq16 Hi Mr Lee, please be informed that... 85219299222 However, when I call grep from inside Java, I get the entire file (results below): 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85aaa234567 Hi Ms Chan, please be informed that... 85292vx5678 Hi Mrs Ng, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 8~!95371616 Hi Mr Lee, please be informed that... 85295371616 Hi Mr Lee, please be informed that... 852&^*&1616 Hi Mr Lee, please be informed that... 8529537Ax16 Hi Mr Lee, please be informed that... 85====ppq16 Hi Mr Lee, please be informed that... 85291234783 a3283784428349247233834728482984723333 85219299222 What could be causing the grep called from Java to return incorrect results?
    I tried passing locale information via the environment string array in Runtime.exec, but nothing seems to change. Am I passing in the locale information incorrectly, or is the problem something else entirely?
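
    The quoting is the likely culprit: a shell strips the single quotes around the pattern before grep ever sees them, but Runtime.exec() passes them through verbatim (and splits the command on spaces), so the literal quotes become part of the regex and nothing matches the -v filter as intended. The effect is easy to reproduce from bash:

      # quotes stripped by the shell: the pattern matches numeric-prefixed lines
      printf '85295371616\tHi Mr Lee\n' | grep -cP '^[0-9]+\t.*'      # prints 1
      # literal quotes reach grep, as with Runtime.exec(): no line matches
      printf '85295371616\tHi Mr Lee\n' | grep -cP "'^[0-9]+\t.*'"    # prints 0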

    Read the article

  • Marking multi-level nested forms as "dirty" in Rails

    - by Charles Kihe
    I have a three-level multi-nested form in Rails. The setup is like this: Projects have many Milestones, and Milestones have many Notes. The goal is to have everything editable within the page with JavaScript, where we can add multiple new Milestones to a Project within the page, and add new Notes to new and existing Milestones. Everything works as expected, except that when I add new notes to an existing Milestone (new Milestones work fine when adding notes to them), the new notes won't save unless I edit one of the fields that actually belong to the Milestone to mark the form "dirty"/edited. Is there a way to flag the Milestone so that the new Notes that have been added will save? Edit: sorry, it's hard to paste in all of the code because there are so many parts, but here goes: Models class Project < ActiveRecord::Base has_many :notes, :dependent => :destroy has_many :milestones, :dependent => :destroy accepts_nested_attributes_for :milestones, :allow_destroy => true accepts_nested_attributes_for :notes, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Milestone < ActiveRecord::Base belongs_to :project has_many :notes, :dependent => :destroy accepts_nested_attributes_for :notes, :allow_destroy => true, :reject_if => proc { |attributes| attributes['content'].blank? } end class Note < ActiveRecord::Base belongs_to :milestone belongs_to :project scope :newest, lambda { |*args| order('created_at DESC').limit(*args.first || 3) } end I'm using a jQuery-based, unobtrusive version of Ryan Bates' combo helper/JS code to get this done. Application Helper def add_fields_for_association(f, association, partial) new_object = f.object.class.reflect_on_association(association).klass.new fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder| render(partial, :f => builder) end end I render the form for the association in a hidden div, and then use the following JavaScript to find it and add it as needed. JavaScript function addFields(link, association, content, func) { var newID = new Date().getTime(); var regexp = new RegExp("new_" + association, "g"); var form = content.replace(regexp, newID); var link = $(link).parent().next().before(form).prev(); if (func) { func.call(); } return link; } I'm guessing the only other relevant piece of code would be the create method in the NotesController: def create respond_with(@note = @owner.notes.create(params[:note])) do |format| format.js { render :json => @owner.notes.newest(3).all.to_json } format.html { redirect_to((@milestone ? [@project, @milestone, @note] : [@project, @note]), :notice => 'Note was successfully created.') } end end The @owner ivar is created in the following before filter: def load_milestone @milestone = @project.milestones.find(params[:milestone_id]) if params[:milestone_id] end def determine_owner @owner = load_milestone @owner ||= @project end Thing is, all this seems to work fine, except when I'm adding new notes to existing milestones. The milestone has to be "touched" in order for new notes to save, or else Rails won't pay attention.

    Read the article

  • SQL Server deadlock between INSERT and SELECT statement

    - by dtroy
    Hi! I've got a problem with multiple deadlocks on SQL Server 2005. This one is between an INSERT and a SELECT statement. There are two tables, Table1 and Table2. Table2 has Table1's PK (table1_id) as a foreign key. The index on table1_id is clustered. The INSERT inserts a single row into table2 at a time. The SELECT joins the two tables (it's a long query which might take up to 12 secs to run). According to my understanding (and experiments) the INSERT should acquire an IS lock on table1 to check referential integrity (which should not cause a deadlock). But in this case it acquired an IX page lock. The deadlock report: <deadlock-list> <deadlock victim="process968898"> <process-list> <process id="process8db1f8" taskpriority="0" logused="2424" waitresource="OBJECT: 5:789577851:0 " waittime="12390" ownerId="61831512" transactionname="user_transaction" lasttranstarted="2010-04-16T07:10:13.347" XDES="0x222a8250" lockMode="IX" schedulerid="1" kpid="3764" status="suspended" spid="52" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:13.350" lastbatchcompleted="2010-04-16T07:10:13.347" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read uncommitted (1)" xactid="61831512" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcTable2_Insert" line="18" stmtstart="576" stmtend="1148" sqlhandle="0x0300050079e62d06e9307f000b9d00000100000000000000"> INSERT INTO dbo.Table2 ( f1, table1_id, f2 ) VALUES ( @p1, @p_DocumentVersionID, @p1 ) </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 103671417] </inputbuf> </process> <process id="process968898" taskpriority="0" logused="0" waitresource="PAGE: 5:1:46510" waittime="7625" ownerId="61831406" transactionname="INSERT" lasttranstarted="2010-04-16T07:10:12.717" XDES="0x418ec00" lockMode="S" schedulerid="2" kpid="1724" status="suspended" spid="53" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:12.713" lastbatchcompleted="2010-04-16T07:10:12.713" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read committed (2)" xactid="61831406" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="DatabaseName.dbo.prcGetList" line="64" stmtstart="3548" stmtend="11570" sqlhandle="0x03000500dbcec17e8d267f000b9d00000100000000000000"> <!-- XXXXXXXXXXXXXX...SELECT STATEMENT WITH Multiple joins including both Table2 table 1 and ....
XXXXXXXXXXXXXXX --> </frame> </executionStack> <inputbuf> Proc [Database Id = 5 Object Id = 2126630619] </inputbuf> </process> </process-list> <resource-list> <pagelock fileid="1" pageid="46510" dbid="5" objectname="DatabaseName.dbo.table1" id="lock6236bc0" mode="IX" associatedObjectId="72057594042908672"> <owner-list> <owner id="process8db1f8" mode="IX"/> </owner-list> <waiter-list> <waiter id="process968898" mode="S" requestType="wait"/> </waiter-list> </pagelock> <objectlock lockPartition="0" objid="789577851" subresource="FULL" dbid="5" objectname="DatabaseName.dbo.Table2" id="lock970a240" mode="S" associatedObjectId="789577851"> <owner-list> <owner id="process968898" mode="S"/> </owner-list> <waiter-list> <waiter id="process8db1f8" mode="IX" requestType="wait"/> </waiter-list> </objectlock> </resource-list> </deadlock> </deadlock-list> Can anyone explain why the INSERT gets the IX page lock? Am I not reading the deadlock report properly? BTW, I have not managed to reproduce this issue. Thanks!

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks: $ fork_bomb <parallel jobs> <number of forks> $ fork_bomb 8 500 This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of the real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it: use Parallel::ForkManager; use Proc::ProcessTable; my $pm = Parallel::ForkManager->new( $ARGV[0] ); my $alarm_sub = sub { kill 9, map { $_->{pid} } grep { $_->{ppid} == $$ } @{ Proc::ProcessTable->new->table }; die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks: my $alarm_sub = sub { kill 9, -$$; # blocks here die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; setpgrp(0, 0); local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.
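
    The underlying mechanics are the same ones the shell uses: give the child its own process group (so the parent isn't in it) and signal the negative group id. A shell sketch of the pattern, for reference:

      # the child gets its own session/process group via setsid, so killing
      # the (negative) group id reaps it and everything it spawned, without
      # touching the parent
      setsid sh -c 'sleep 300 & sleep 300 & wait' &
      pgid=$!
      sleep 1
      kill -TERM -- "-$pgid"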

    Read the article

  • Java Refuses to Start - Could not reserve enough space for object heap

    - by Randyaa
    Background We have a pool of approximately 20 linux blades. Some are running Suse, some are running Redhat. ALL share NAS space which contains the following 3 folders: /NAS/app/java - a symlink that points to an installation of a Java JDK. Currently version 1.5.0_10 /NAS/app/lib - a symlink that points to a version of our application. /NAS/data - directory where our output is written All our machines have 2 processors (hyperthreaded) with 4gb of physical memory and 4gb of swap space. We limit the number of 'jobs' each machine can process at a given time to 6 (this number likely needs to change, but that does not enter into the current problem so please ignore it for the time being). Some of our jobs set a Max Heap size of 512mb, some others reserve a Max Heap size of 2048mb. Again, we realize we could go over our available memory if 6 jobs started on the same machine with the heap size set to 2048, but to our knowledge this has not yet occurred. The Problem Once in a while a Job will fail immediately with the following message: Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. We used to chalk this up to too many jobs running at the same time on the same machine. The problem happened infrequently enough (MAYBE once a month) that we'd just restart it and everything would be fine. The problem has recently gotten much worse. All of our jobs which request a max heap size of 2048m fail immediately almost every time and need to get restarted several times before completing. We've gone out to individual machines and tried executing them manually with the same result. Debugging It turns out that the problem only exists for our SuSE boxes. The reason it has been happening more frequently is because we've been adding more machines, and the new ones are SuSE. 'cat /proc/version' on the SuSE boxes gives us: Linux version 2.6.5-7.244-bigsmp (geeko@buildhost) (gcc version 3.3.3 (SuSE Linux)) #1 SMP Mon Dec 12 18:32:25 UTC 2005 'cat /proc/version' on the RedHat boxes gives us: Linux version 2.4.21-32.0.1.ELsmp ([email protected]) (gcc version 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 SMP Tue May 17 17:52:23 EDT 2005 'uname -a' gives us the following on BOTH types of machines: UTC 2005 i686 i686 i386 GNU/Linux No jobs are running on the machine, and no other processes are utilizing much memory. All of the processes currently running might be using 100mb total. 'top' currently shows the following: Mem: 4146528k total, 3536360k used, 610168k free, 132136k buffers Swap: 4194288k total, 0k used, 4194288k free, 3283908k cached 'vmstat' currently shows the following: procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 610292 132136 3283908 0 0 0 2 26 15 0 0 100 0 If we kick off a job with the following command line (Max Heap of 1850mb) it starts fine: java/bin/java -Xmx1850M -cp helloworld.jar HelloWorld Hello World If we bump up the max heap size to 1875mb it fails: java/bin/java -Xmx1875M -cp helloworld.jar HelloWorld Error occurred during initialization of VM Could not reserve enough space for object heap Could not create the Java virtual machine. It's quite clear that the memory currently being used is for Buffering/Caching and that's why so little is being displayed as 'free'. What isn't clear is why there is a magical 1850mb line where anything higher means Java can't start. Any explanations would be greatly appreciated.
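
    The magic ~1850 MB line is characteristic of a 32-bit JVM: both uname outputs show i686, and a 32-bit process has to carve the heap out of one contiguous chunk of a roughly 3 GB address space, so where the ceiling falls depends on how libraries happen to be mapped. A quick sketch to confirm what the JVM actually is:

      file /NAS/app/java/bin/java        # "ELF 32-bit" vs "ELF 64-bit"
      java -version 2>&1 | grep -i 64    # a 64-bit VM announces itself here
      ulimit -v                          # any per-process address-space cap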

    Read the article

  • Ruby on Rails is complaining about a method that doesn't exist that is built into Active Record

    - by grg-n-sox
    This will probably just be a simple problem and I am just blind or an idiot, but I could use some help. So I am going over some basic guides in Rails, reviewing the basics and such for an upcoming exam. One of the guides included was the sort-of-standard getting started guide over at guides.rubyonrails.org. Here is the link if you need it. Also, all the code for my app is from there, so I have no problem releasing any of my code since it should be the same as shown there. I didn't do a copy paste, but I basically was typing with Vim in one half of my screen and the web page in the other half, typing what I saw. http://guides.rubyonrails.org/getting_started.html So like I said, I was going along the guide when I noticed that past a certain point in the tutorial I was always getting an error on the site. To find the section of code, just hit Ctrl+f on the page (or whatever you have search/find set to) and enter "accepts_". This should immediately direct you to this chunk of code. class Post < ActiveRecord::Base validates_presence_of :name, :title validates_length_of :title, :minimum => 5 has_many :comments has_many :tags accepts_nested_attributes_for :tags, :allow_destroy => :true , :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } } end So I tried putting this in my code. It is in ~/Rails/blog/app/models/post.rb in case you are wondering. However, even after all the other code I put in past that in the guide, hoping I was just missing some line of code that would come up later in the guide, nothing changed: same error every time. This is what I get. NoMethodError in PostsController#index undefined method `accepts_nested_attributes_for' for #<Class:0xb7109f98> /usr/lib/ruby/gems/1.8/gems/activerecord-2.2.2/lib/active_record/base.rb:1833:in `method_missing' app/models/post.rb:7 app/controllers/posts_controller.rb:9:in `index' Request Parameters: None Response Headers: {"Content-Type"=>"", "cookie"=>[], "Cache-Control"=>"no-cache"} Now, I copied the above code from the guide. The two code sections mentioned in the error message I will paste as-is below. class PostsController < ApplicationController # GET /posts # GET /posts.xml before_filter :find_post, :only => [:show, :edit, :update, :destroy] def index @posts = Post.find(:all) # <= the line 9 referred to in error message respond_to do |format| format.html # index.html.erb format.xml { render :xml => @posts } end end class Post < ActiveRecord::Base validates_presence_of :name, :title validates_length_of :title, :minimum => 5 has_many :comments has_many :tags accepts_nested_attributes_for :tags, :allow_destroy => :true , # <= problem :reject_if => proc { |attrs| attrs.all? { |k, v| v.blank? } } end Also, here is my local gem list. I do note that they are a bit out of date, but the default Rails install on any of the school machines (an environment likely for my exam) is basically 'gem install rails --version 2.2.2', and since they are Windows machines, they come with all the normal Windows ruby gems that come with the ruby installer. However, I am running this off a Debian virtual machine of mine, but trying to set it up similarly, and I figured the Windows ruby gems wouldn't change anything in Rails. *** LOCAL GEMS *** actionmailer (2.2.2) actionpack (2.2.2) activerecord (2.2.2) activeresource (2.2.2) activesupport (2.2.2) gem_plugin (0.2.3) hpricot (0.8.2) linecache (0.43) log4r (1.1.7) ptools (1.1.9) rack (1.1.0) rails (2.2.2) rake (0.8.7) sqlite3-ruby (1.2.3) So any ideas on what the problem is? Thanks in advance.
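
    For what it's worth, accepts_nested_attributes_for first shipped in Rails 2.3, so on the 2.2.2 gems shown above it genuinely is an undefined method; a sketch of the obvious fix:

      gem install rails --version 2.3.2
      # then bump RAILS_GEM_VERSION in config/environment.rb to match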

    Read the article

  • PostgreSQL triggers and passing parameters

    - by iandouglas
    This is a multi-part question. I have a table similar to this: CREATE TABLE sales_data ( Company character(50), Contract character(50), top_revenue_sum integer, top_revenue_sales integer, last_sale timestamp) ; I'd like to create a trigger for new inserts into this table, something like this: CREATE OR REPLACE FUNCTION add_contract() RETURNS VOID DECLARE myCompany character(50), myContract character(50), BEGIN myCompany = TG_ARGV[0]; myContract = TG_ARGV[1]; IF (TG_OP = 'INSERT') THEN EXECUTE 'CREATE TABLE salesdata_' || $myCompany || '_' || $myContract || ' ( sale_amount integer, updated TIMESTAMP not null, some_data varchar(32), country varchar(2) ) ;' EXECUTE 'CREATE TRIGGER update_sales_data BEFORE INSERT OR DELETE ON salesdata_' || $myCompany || '_' || $myContract || ' FOR EACH ROW EXECUTE update_sales_data( ' || $myCompany || ',' || $myContract || ', revenue);' ; END IF; END; $add_contract$ LANGUAGE plpgsql; CREATE TRIGGER add_contract AFTER INSERT ON sales_data FOR EACH ROW EXECUTE add_contract() ; Basically, every time I insert a new row into sales_data, I want to generate a new table where the name of the table will be defined as something like "salesdata_Company_Contract" So my first question is how can I pass the Company and Contract data to the trigger so it can be passed to the add_contract() stored procedure? From my stored procedure, you'll see that I also want to update the original sales_data table whenever new data is inserted into the salesdata_Company_Contract table. This trigger will do something like this: CREATE OR REPLACE FUNCTION update_sales_data() RETURNS trigger as $update_sales_data$ DECLARE myCompany character(50) NOT NULL, myContract character(50) NOT NULL, myRevenue integer NOT NULL BEGIN myCompany = TG_ARGV[0] ; myContract = TG_ARGV[1] ; myRevenue = TG_ARGV[2] ; IF (TG_OP = 'INSERT') THEN UPDATE sales_data SET top_revenue_sales = top_revenue_sales + 1, top_revenue_sum = top_revenue_sum + $myRevenue, updated = now() WHERE Company = $myCompany AND Contract = $myContract ; ELSIF (TG_OP = 'DELETE') THEN UPDATE sales_data SET top_revenue_sales = top_revenue_sales - 1, top_revenue_sum = top_revenue_sum - $myRevenue, updated = now() WHERE Company = $myCompany AND Contract = $myContract ; END IF; END; $update_sales_data$ LANGUAGE plpgsql; This will, of course, require that I pass several parameters around within these stored procedures and triggers, and I'm not sure (a) if this is even possible, or (b) practical, or (c) best practice and we should just put this logic into our other software instead of asking the database to do this work for us. To keep our table sizes down, as we'll have hundreds of thousands of transactions per day, we've decided to partition our data using the Company and Contract strings as part of the table names themselves so they're all very small in size; file IO for us is faster and we felt we'd get better performance. Thanks for any thoughts or direction. My thinking, now that I've written all of this out, is that maybe we need to write stored procedures where we pass our insert data as parameters, and call that from our other software, and have the stored procedure do the insert into "sales_data" then create the other table. Then, have a second stored procedure to insert new data into the salesdata_Company_Contract tables, where the table name is passed to the stored proc as a parameter, and again have that stored proc do the insert, then update the main sales_data table afterward. What approach would you take?

    Read the article

  • How to make a condition inside getView of a custom BaseAdapter

    - by user1501150
    I want to make a custom ListView with a custom BaseAdapter where, when status == 1, I want to show a CheckBox, and otherwise a TextView. My given condition is: if (NewtheStatus == 1) { alreadyOrderText .setVisibility(TextView.GONE); } else{ checkBox.setVisibility(CheckBox.GONE); } But sometimes I get rows that have neither the CheckBox nor the TextView. The code of my custom BaseAdapter is given below. private class MyCustomAdapter extends BaseAdapter { private static final int TYPE_ITEM = 0; private static final int TYPE_ITEM_WITH_HEADER = 1; // private static final int TYPE_MAX_COUNT = TYPE_SEPARATOR + 1; private ArrayList<WatchListAllEntity> mData = new ArrayList(); private LayoutInflater mInflater; private ArrayList<WatchListAllEntity> items = new ArrayList<WatchListAllEntity>(); public MyCustomAdapter(Context context, ArrayList<WatchListAllEntity> items) { mInflater = (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE); } public void addItem(WatchListAllEntity watchListAllEntity) { // TODO Auto-generated method stub items.add(watchListAllEntity); } public View getView(int position, View convertView, ViewGroup parent) { View v = convertView; final int position1 = position; if (v == null) { LayoutInflater vi = (LayoutInflater) getSystemService(Context.LAYOUT_INFLATER_SERVICE); v = vi.inflate(R.layout.listitempict, null); } watchListAllEntity = new WatchListAllEntity(); watchListAllEntity = items.get(position); Log.i("position: iteamsLength ", position + ", " + items.size()); if (watchListAllEntity != null) { ImageView itemImage = (ImageView) v .findViewById(R.id.imageviewproduct); if (watchListAllEntity.get_thumbnail_image_url1() != null) { Drawable image = ImageOperations(watchListAllEntity .get_thumbnail_image_url1().replace(" ", "%20"), "image.jpg"); // itemImage.setImageResource(R.drawable.icon); if (image != null) { itemImage.setImageDrawable(image); itemImage.setAdjustViewBounds(true); } else { itemImage.setImageResource(R.drawable.iconnamecard); } Log.i("status_ja , status", watchListAllEntity.get_status_ja() + " ," + watchListAllEntity.getStatus()); int NewtheStatus = Integer.parseInt(watchListAllEntity .getStatus()); CheckBox checkBox = (CheckBox) v .findViewById(R.id.checkboxproduct); TextView alreadyOrderText = (TextView) v .findViewById(R.id.alreadyordertext); if (NewtheStatus == 1) { alreadyOrderText .setVisibility(TextView.GONE); } else{ checkBox.setVisibility(CheckBox.GONE); } Log.i("Loading ProccardId: ", watchListAllEntity.get_proc_card_id() + ""); checkBox.setOnCheckedChangeListener(new OnCheckedChangeListener() { public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) { WatchListAllEntity watchListAllEntity2 = items .get(position1); Log.i("Position: ", position1 + ""); // TODO Auto-generated method stub if (isChecked) { Constants.orderList.add(watchListAllEntity2 .get_proc_card_id()); Log.i("Proc Card Id Add: ", watchListAllEntity2 .get_proc_card_id() + ""); } else { Constants.orderList.remove(watchListAllEntity2 .get_proc_card_id()); Log.i("Proc Card Id Remove: ", watchListAllEntity2.get_proc_card_id() + ""); } } }); } } return v; } private Drawable ImageOperations(String url, String saveFilename) { try { InputStream is = (InputStream) this.fetch(url); Drawable d = Drawable.createFromStream(is, "src"); return d; } catch (MalformedURLException e) { return null; } catch (IOException e) { return null; } } public Object fetch(String address) throws MalformedURLException, IOException { URL url = new URL(address); Object content = url.getContent(); return content; } public int getCount() { // TODO Auto-generated method stub int sizef=items.size(); Log.i("Size", sizef+""); return items.size(); } public Object getItem(int position) { // TODO Auto-generated method stub return items.get(position); } public long getItemId(int position) { // TODO Auto-generated method stub return position; } }

    Read the article

  • Entity Framework version 1 – Brief Synopsis and Tips – Part 1

    - by Rohit Gupta
    To do eager loading use projections (e.g. from c in context.Contacts select c, c.Addresses) or use Include query builder methods (Include(“Addresses”)). If there is multi-level hierarchical data, then to eager load all the relationships use Include query builder methods like customers.Include("Order.OrderDetail") to include the Order and OrderDetail collections, or use customers.Include("Order.OrderDetail.Location") to include the Order, OrderDetail and Location collections with a single Include statement. =========================================================================== If the query uses joins then the Include() query builder method will be ignored; use nested queries instead. If the query does projections then the Include() query builder method will be ignored. Use Address.ContactReference.Load() OR Contact.Addresses.Load() if you need to deferred-load a specific entity – this will result in extra round trips to the database. ObjectQuery<> cannot return anonymous types... it will return an ObjectQuery<DBDataRecord>. Only the Include method can be added to LINQ query methods. Any LINQ query method can be added to query builder methods. If you need to append a query builder method (other than Include) after a LINQ method, then cast the IQueryable<Contact> to ObjectQuery<Contact> and then append the query builder method to it. =========================================================================== Query builder methods are Select, Where, Include methods which use Entity SQL as parameters, e.g. "it.StartDate, it.EndDate". When query builder methods do projection they return ObjectQuery<DBDataRecord>, thus to iterate over this collection use contact.Item[“Name”].ToString(). When LINQ to Entities methods do projection, they return a collection of anonymous types --- thus the collection is strongly typed and supports Intellisense. The EF Object Context can track changes only on Entities, not on anonymous types. If you use a Defining Query for an EntitySet then the EntitySet becomes readonly, since a Defining Query is the same as a View (which is treated as readonly by default). However, if you want to use this EntitySet for inserts/updates/deletes then we need to map stored procs (as created in the DB) to the insert/update/delete functions of the Entity in the Designer. You can use either the Execute method or the ToList() method to bind data to datasources/bindingsources. If you use the Execute method then remember that you can traverse the ObjectResult<> collection (returned by Execute) only ONCE. In WPF use ObservableCollection to bind to data sources, for keeping track of changes and letting EF send updates to the DB automatically. Use extension methods to add logic to Entities, e.g. create extension methods for the EntityObject class. Create a method in the ObjectContext partial class and pass the entity as a parameter, then call this method as desired from within each entity. ================================================================ DefiningQueries and Stored Procedures: For custom Entities, one can use a DefiningQuery or stored procedures. Thus the custom Entity collection will be populated using the DefiningQuery (of the EntitySet) or the sproc. If you use a sproc to populate the EntityCollection then the query execution is immediate, this execution happens on the server side, and any filters will be applied in the client app.
    If we use a DefiningQuery then these queries are composable, meaning the filters (if applied to the EntitySet) will all be sent together as a single query to the DB, returning only filtered results. If the sproc returns results that cannot be mapped to an existing Entity, then we first create the Entity/EntitySet in the CSDL using the Designer, then create a dummy Entity/EntitySet using XML in the SSDL. When creating an EntitySet in the SSDL for this dummy Entity, use TSQL that does not return any results but does return the relevant columns, e.g. select ContactID, FirstName, LastName from dbo.Contact where 1=2 Also ensure that the Entity created in the SSDL uses the SQL data types and not .NET data types. If you are unable to open the EDMX file in the designer then note the errors... they will give precise info on what is wrong. The third option is to simply create a native query in the SSDL using <Function Name="PaymentsforContact" IsComposable="false">   <CommandText>SELECT ActivityId, Activity AS ActivityName, ImagePath, Category FROM dbo.Activities </CommandText></Function> Then map this Function to an existing Entity. This is a quick way to get a custom Entity which is a regular Entity with renamed columns or additional (computed) columns. The disadvantage to using this is that it will return all the rows from the defining query, and any filter (if defined) will be applied only on the client side (after getting all the rows from the DB). If you use a DefiningQuery instead then we can use that as a composable query. The fourth option (for mapping READ stored proc results to a non-existent Entity) is to create a View in the database which returns all the fields that the sproc also returns, then update the model so that it contains this View as an Entity, and map the READ sproc to this View Entity. The other option would be to simply create the View and remove the sproc altogether. ================================================================ To execute a sproc that does not return an Entity, use an EntityCommand to execute that proc. You cannot call a sproc FunctionImport that does not return Entities from code; the only way is to use SSDL function calls using EntityCommand. This changes with Entity Framework version 4, where you can return scalar types, complex types, Entities or NonQuery. ================================================================ For a UDF created as a Function in the SSDL, we need to set the Name & IsComposable properties on the Function element. IsComposable is always false for sprocs; for UDFs set this to true. You cannot call a UDF "Function" from within code, since you cannot import a UDF Function into the CSDL model (with version 1 of EF); only stored procedures can be imported and then mapped to an Entity. ================================================================ Entity Framework requires properties that are involved in association mappings to be mapped in all of the function mappings for the entity (insert, update and delete). Because Payment has an association to Reservation... hence we need to pass both the paymentId and reservationId to the Delete sproc even though just the paymentId is the PK on the Payment table. ================================================================ When mapping insert, update and delete procs to an Entity, ensure that either all three are mapped or none are.
    Further, if you have a base class and derived class in the CSDL, then you must map (insert, update, delete) sprocs to all parent and child entities in the inheritance relationship. Note that this limitation (that base and derived entity methods must all be mapped) does not apply when you are mapping READ stored procedures.... ================================================================ You can write stored procedure SQL directly into the SSDL by creating a Function element in the SSDL, and once created, you can map this Function to a CSDL Entity directly in the designer during Function Import. ================================================================ You can do Entity Splitting such that one Entity maps to multiple tables in the DB. For example, the Customer Entity currently derives from the Contact Entity... in addition it also references the ContactPersonalInfo Entity. One can copy all properties from the ContactPersonalInfo Entity into the Customer Entity and then delete the ContactPersonalInfo Entity; finally one needs to map the copied properties to the ContactPersonalInfo table in Table Mapping (by adding another table (ContactPersonalInfo) to the Table Mapping)... this is called Entity Splitting. Thus now when you insert a Customer record, it will automatically create SQL to insert records into the Contact, Customers and ContactPersonalInfo tables, even though you have a single Entity called Customer in the CSDL. =================================================================== There is Table per Type inheritance, where one EDM Entity can derive from another EDM Entity and absorb the inherited Entity's properties. For example, in the Break Away Geek Adventures EDM, the Customer Entity derives (inherits) from the Contact Entity and absorbs all the properties of the Contact Entity. Thus when you create a Customer Entity in code and then call context.SaveChanges, the Object Context will first create the TSQL to insert into the Contact table, followed by TSQL to insert into the Customer table. =================================================================== Then there is Table per Hierarchy inheritance..... where different types are created based on a condition (similar to applying a condition to filter an Entity to contain filtered records)... the difference being that the filter condition populates a new Entity type (derived from the base Entity). In the BreakAway sample the example is the Lodging Entity, which is an abstract Entity, with Resort and NonResort Entities which derive from the Lodging Entity, and records are filtered based on the value of the Resort boolean field. =================================================================== Then there is Table per Concrete Type hierarchy, where we create a concrete Entity for each table in the database. In the BreakAway sample there is an Entity for the Reservation table and another Entity for the OldReservation table, even though both tables contain the same number of fields.
    The OldReservation Entity can then inherit from the Reservation Entity, and one can configure the OldReservation Entity to remove all scalar properties from the Entity (since it inherits the properties from Reservation and filters based on the ReservationDate field). =================================================================== Complex Types (Complex Properties): Entities in EF can also contain Complex Properties (in addition to scalar properties), and these Complex Properties reference a ComplexType (not an EntityType). DropDownList, ListBox, RadioButtonList, CheckBoxList and BulletedList are examples of list server controls (not data-bound controls); these controls cannot use Complex Properties during databinding, they need scalar properties. So if an Entity contains Complex Properties and you need to bind those to list server controls, then use projections to return scalar properties and bind them to the control (the disadvantage is that projected collections are not tracked by the Object Context and hence changes to the projected collections bound to controls cannot be persisted). ObjectDataSource and EntityDataSource do account for Complex Properties, and one can bind entities with Complex Properties to data source controls and they will be tracked for changes... with no additional plumbing needed to persist changes to these collections bound to controls. So data-bound controls like GridView and FormView need to use EntityDataSource or ObjectDataSource as a datasource for entities that contain Complex Properties, so that changes made to the datasource using the GridView can be persisted to the DB (enabling the controls for updates)... if you cannot use the EntityDataSource you need to flatten the ComplexType properties using projections. With EF version 4, ComplexTypes are supported by the Designer and one can add/remove/compose Complex Types directly using the Designer. =================================================================== Conditional Mapping... is like Table per Hierarchy inheritance, where Entities inherit from a base class and conditions are then used to populate the EntitySet (called conditional mapping). Conditional mapping has limitations, since you can only use =, IS NULL and IS NOT NULL conditions to do conditional mapping. If you need more operators for filtering/mapping conditionally then use a QueryView (or possibly a Defining Query) to create a readonly Entity. QueryViews are readonly by default... the EntitySet created by the QueryView is enabled for change tracking by the ObjectContext; however, the ObjectContext cannot create insert/update/delete TSQL statements for these Entities when SaveChanges is called, since it is a QueryView. One way to get around this limitation is to map stored procedures for the insert/update/delete operations in the Designer. =================================================================== Difference between QueryView and Defining Query: a QueryView is defined in the (MSL) mapping file/section of the EDM XML, whereas the DefiningQuery is defined in the store schema (SSDL). A QueryView is written using Entity SQL and is thus database agnostic and can be used against any database/data layer. A DefiningQuery is written in the database's own language, i.e. TSQL or PL/SQL, thus you have more control. =================================================================== Performance: Lazy loading is deferred loading done automatically. Lazy loading is supported with EF version 4 and is on by default.
    If you need to turn it off then use context.ContextOptions.LazyLoadingEnabled = false. To improve performance, consider precompiling the ObjectQuery using the CompiledQuery.Compile method.

    Read the article

  • How to make sysctl network bridge settings persist after a reboot?

    - by Zack Perry
    I am setting up a notebook for software demo purposes. The machine has 8GB RAM, a Core i7 Intel CPU, a 128GB SSD, and runs Ubuntu 12.04 LTS 64bit. The notebook is used as a KVM host and runs a few KVM guests. All such guests use the virbr0 default bridge. To enable them to communicate with each other using multicast, I added the following to the host's /etc/sysctl.conf, as shown below net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 Afterwards, following man sysctl(8), I issued the following: sudo /sbin/sysctl -p /etc/sysctl.conf My understanding is that this should make these settings persist over reboots. I tested it, and was surprised to find out the following: root@sdn1 :/proc/sys/net/bridge# more *tables :::::::::::::: bridge-nf-call-arptables :::::::::::::: 1 :::::::::::::: bridge-nf-call-ip6tables :::::::::::::: 1 :::::::::::::: bridge-nf-call-iptables :::::::::::::: 1 All the defaults are coming back! Yes, I can use some kludgy workarounds such as putting a /sbin/sysctl -p /etc/sysctl.conf into the host's /etc/rc.local, but I would rather "do it right". Did I misunderstand the man page, or is there something that I missed? Thanks for any hints. -- Zack
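
    A likely explanation: /etc/sysctl.conf is applied early at boot, before anything (libvirt, in this setup) loads the bridge module, and when the module finally loads its sysctls reappear at their defaults. A sketch of one fix, loading the module before the sysctl settings are applied:

      echo bridge >> /etc/modules     # load the bridge module at early boot
      sysctl -p /etc/sysctl.conf      # re-apply now and verify the values stick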

    Read the article

  • What's consuming HDD space?

    - by Umair Mustafa
    I have a single partition of 92 GB in which I installed Ubuntu 12.04. For some unknown reason a message pops up saying that I only have 1 GB of HDD space left. I ran the command sudo du -hscx * on / and /home. /home gave me this result 4.0K C:\nppdf32Log\debuglog.txt 0 convertedvideo.avi 176M Desktop 16K Documents 169M Downloads 4.0K examples.desktop 17M file.txt 4.0K Music 984K Pictures 4.0K Public 320K Red Hat 6.iso 2.5M syslog-ng_3.3.6.tar.gz 4.0K Templates 8.0K terminal.png 1.2M Thunderbird Attachments 698M ubuntu10.04LTS.iso 16K Ubuntu One 4.0K Untitled Folder 4.0K Videos 21G VirtualBox VMs 22G total And / gave me this result 81G home 0 initrd.img 0 initrd.img.old 833M lib 16K lost+found 68K media 4.0K mnt 260M opt du: cannot access `proc/8339/task/8339/fd/4': No such file or directory du: cannot access `proc/8339/task/8339/fdinfo/4': No such file or directory du: cannot access `proc/8339/fd/4': No such file or directory du: cannot access `proc/8339/fdinfo/4': No such file or directory 0 proc 640K root 908K run 8.6M sbin 4.0K selinux 4.0K srv 0 sys 148K tmp 3.3G usr 436M var 0 vmlinuz 0 vmlinuz.old 86G total If you look at the result returned by /, it shows that /home is consuming 81GB, but on the other hand /home returns only 22GB. I can't figure out what's consuming the HDD. I have not installed anything except virtual machines. Update: perpetrator found using Disk Usage Analyzer.
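
    The gap is probably dot-directories: du * never matches hidden entries, so things like a bulging ~/.cache or ~/.local (hypothetical examples) are counted when du walks / but not by the wildcard. A sketch that covers them, plus space held by deleted-but-open files that du can never see:

      # include dot-entries, stay on one filesystem, worst offenders last
      sudo du -xsh /home/*/.[!.]* /home/*/* 2>/dev/null | sort -h

      # files deleted while still open keep their blocks allocated
      sudo lsof +L1 | head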

    Read the article

  • The battery indicator & power settings panel show the wrong battery state

    - by Eastsun
    My laptop is a ThinkPad E420 with Ubuntu 12.04 64-bit installed; the kernel version is 3.2.0-33-generic. I set the battery threshold to 60% via Windows 7, and the threshold seems to carry over automatically in Ubuntu. However, there are some problems with the battery indicator's state. I'll list some information about the battery state below. (Note that in the terminal Ubuntu says the battery charging state is charged, while the power settings panel and the battery indicator both show the battery as charging.) $ cat /proc/acpi/battery/BAT0/state present: yes capacity state: ok *charging state: charged* present rate: 0 mW remaining capacity: 18200 mWh present voltage: 16103 mV (screenshots: battery indicator state, power settings panel) Is there any way to fix the problem? Edit: some results from sudo fwts battery - battery.log: 3 passed, 4 failed, 0 warnings, 0 aborted, 0 skipped, 0 info only. Test Failure Summary =============================== Critical failures: NONE High failures: 2 battery: Did not detect any ACPI battery events. battery: Could not detect ACPI events for battery BAT0. Medium failures: 1 battery: Battery BAT0 claims it's charging but no charge is added Low failures: 1 battery: System firmware may not support cycle count interface or it reports it incorrectly for battery BAT0. Other failures: NONE Test |Pass |Fail |Abort|Warn |Skip |Info | ---------------+-----+-----+-----+-----+-----+-----+ battery | 3| 4| | | | | ---------------+-----+-----+-----+-----+-----+-----+ Total: | 3| 4| 0| 0| 0| 0| ---------------+-----+-----+-----+-----+-----+-----+ Any help would be appreciated!
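
    One way to narrow it down is to compare what the kernel itself reports against the indicator; with a 60% charge threshold the battery legitimately stops charging well below "full", which some userspace tools misread. A quick check (sysfs attribute names can vary by kernel):

      cat /sys/class/power_supply/BAT0/status     # Charging / Discharging / Full
      cat /sys/class/power_supply/BAT0/capacity   # charge as a percentage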

    Read the article

  • IPv6 autoconfiguration not working

    - by Allan Ruin
    In Windows 7, my computer can automatically get an IPv6 global address and use the IPv6 network, but in Ubuntu Natty I can't find out how to make stateless configuration work. My network is a university campus network, so I don't need tunnels. I think if something can be accomplished silently and successfully in Windows, it shouldn't be impossible in Linux. I tried manually editing /etc/network/interfaces and used a static IPv6 address, and I can use IPv6 that way, but I just want to use autoconfiguration. I found this post: http://superuser.com/questions/33196/how-to-disable-autoconfiguration-on-ipv6-in-linux and tried sudo sysctl -w net.ipv6.conf.all.autoconf=1 sudo sysctl -w net.ipv6.conf.all.accept_ra=1 but without any luck. I got this in dmesg: root@natty-150:~# dmesg |grep IPv6 [ 26.239607] eth0: no IPv6 routers present [ 657.365194] eth0: no IPv6 routers present [ 719.101383] eth0: no IPv6 routers present [32864.604234] eth0: no IPv6 routers present [33267.619767] eth0: no IPv6 routers present [33341.507307] eth0: no IPv6 routers present I am not sure whether it matters, but I then set up a static IPv6 address (with gateway) and restarted networking; ping6 ipv6.google.com worked and the IPv6 network was fine. This time an entry was added in dmesg [33971.214920] eth0: no IPv6 routers present So I guess the complaint about no IPv6 routers does not matter? Here is the IPv6 forwarding setting, but I guessed forwarding is used for radvd stuff? root@natty-150:/# cat /proc/sys/net/ipv6/conf/eth0/forwarding 0 After ajmitch mentioned the forwarding setting, I added this to the sysctl.conf file: net.ipv6.conf.all.autoconf = 1 net.ipv6.conf.all.accept_ra = 1 net.ipv6.conf.default.forwarding = 1 net.ipv6.conf.lo.forwarding = 1 net.ipv6.conf.eth0.forwarding = 1 and then ran sysctl -p and /etc/init.d/networking restart But this still doesn't work.
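
    One detail worth checking: once forwarding is enabled on an interface, the kernel deliberately ignores router advertisements unless accept_ra is set to 2, not 1. A sketch of the settings plus a way to confirm RAs are arriving at all:

      sysctl -w net.ipv6.conf.eth0.accept_ra=2   # accept RAs even while forwarding
      sysctl -w net.ipv6.conf.eth0.autoconf=1

      # router advertisements are ICMPv6 type 134
      tcpdump -i eth0 -vv 'icmp6 and ip6[40] == 134'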

    Read the article

  • Wake On Lan (WOL) for Realtek RTL8101E/RTL8102E

    - by Heisennberg
    I'm unsuccessfully trying to get Wake on LAN to work with my local server (IP address: 192.168.0.2, distro Ubuntu 12.04.3 LTS), which has a Realtek RTL8101E/RTL8102E ethernet card. The computer sending the WOL packet is a MacBook Pro connected on the same network. Yet the server fails to start. Here's what I have done so far: name@serverName ~ $ cat /proc/acpi/wakeup Device S-state Status Sysfs node HDEF S3 *disabled pci:0000:00:1b.0 PXSX S3 *disabled PXSX S0 *enabled pci:0000:04:00.0 PXSX S0 *disabled USB1 S3 *enabled pci:0000:00:1d.0 USB2 S3 *enabled pci:0000:00:1d.1 USB3 S3 *enabled pci:0000:00:1d.2 USB5 S3 *enabled pci:0000:00:1a.1 EHC1 S3 *enabled pci:0000:00:1d.7 EHC2 S3 *enabled pci:0000:00:1a.7 name@serverName ~ $ lspci ------ 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 01) ------ name@serverName ~ $ sudo ethtool eth0 Settings for eth0: Supported ports: [ TP MII ] Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full Supported pause frame use: No Supports auto-negotiation: Yes Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full Advertised pause frame use: Symmetric Receive-only Advertised auto-negotiation: Yes Link partner advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full Link partner advertised pause frame use: Symmetric Receive-only Link partner advertised auto-negotiation: Yes Speed: 100Mb/s Duplex: Full Port: MII PHYAD: 0 Transceiver: internal Auto-negotiation: on Supports Wake-on: pumbg Wake-on: g Current message level: 0x00000033 (51) drv probe ifdown ifup Link detected: yes and I'm calling the WOL with: name@serverName ~ $ wakeonlan xx:xx:xx:xx:xx Sending magic packet to 255.255.255.255:9 with xx:xx:xx:xx:xx I have successfully activated the WOL option in my computer's BIOS. Any idea?
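
    Two things worth trying, sketched below: drivers often lose the WOL flag as the interface goes down, so re-assert it on every ifup, and aim the magic packet at the subnet's broadcast address from the Mac (the MAC address here is a placeholder):

      # in /etc/network/interfaces, under the eth0 stanza:
      post-up /sbin/ethtool -s eth0 wol g

      # from the sending machine:
      wakeonlan -i 192.168.0.255 00:11:22:33:44:55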

    Read the article

  • External microphone not working

    - by haireefairee
    gnome-volume-control does not recognise external hardware. My headphones work nonetheless, but an external microphone does not. External microphones used to work, but at times were temperamental - I would have to log in or log out with or without the microphone plugged in. I am running Ubuntu 10.04 LTS (Lucid Lynx) on an MSI U100 Wind notebook with one Intel soundcard, trying to use a jack microphone which has worked previously. USB microphones have also been problematic. I have done the basics: Installed upgrades. Checked nothing is muted. Looked for the device in gnome-volume-control. Tried using a different microphone that works on a friend's computer. Tested that my microphone works when using a different computer. Checked my soundcard can be seen (cat /proc/asound/cards). I have done more complicated things: I have tried playing around with settings in alsamixer. Nothing is muted. I can adjust "mic" and "internal mic" regardless of whether an external microphone is plugged in. I have the choice of input source from "mic", "front mic", "line" and "CD". I've played around changing this and it hasn't helped. I only have one CAPTURE option. In gnome-sound-recorder I have the choice of line, microphone 1 and microphone 2. I have played around changing this option. None of these pick up sound from the external microphone. Microphone 2 is the microphone on my laptop, which is bad quality. In gnome-sound-recorder I have the choice of different profiles, and changing this has not helped either. I have looked at gstreamer-properties but none of that seemed helpful. I don't know if there is a way to check whether these external devices are being picked up. I would like to make an external microphone work. Please help!
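
    A useful low-level test that bypasses gnome-volume-control entirely (device names are examples; pick whatever arecord -l lists):

      arecord -l                                      # enumerate capture devices
      arecord -D plughw:0,0 -f cd -d 5 /tmp/mic.wav   # 5-second test recording
      aplay /tmp/mic.wav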

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >