Search Results

Search found 4118 results on 165 pages for 'attributes'.


  • PHP get values from SimpleXMLElement array

    - by seanp2k
    I have this: [1]=> object(SimpleXMLElement)#6 (1) { ["@attributes"]=> array(14) { ["name"]=> string(5) "MySQL" ["acknowledged"]=> string(1) "1" ["comments"]=> string(1) "1" ["current_check_attempt"]=> string(1) "1" ["downtime"]=> string(1) "0" ["last_check"]=> string(19) "2010-05-01 17:57:00" ["markdown_filter"]=> string(1) "0" ["max_check_attempts"]=> string(1) "3" ["output"]=> string(42) "CRITICAL - Socket timeout after 10 seconds" ["perfdata_available"]=> string(1) "1" ["service_object_id"]=> string(3) "580" ["state"]=> string(8) "critical" ["state_duration"]=> string(6) "759439" ["unhandled"]=> string(1) "0" } } (I used var_dump($child) to generate that) How do I get the 'name' attribute out of there as a string? Here is my code: $xml = simplexml_load_string($results); foreach($xml->data->list as $child) { var_dump($child); echo $child->getName() . ": " . $child->name . "<br />"; }
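
    One way to pull the attribute out (a sketch based on the dump above): SimpleXML exposes attributes through array access on the element, and a cast is needed because it otherwise returns a SimpleXMLElement object rather than a string.

        $xml = simplexml_load_string($results);
        foreach ($xml->data->list as $child) {
            // attributes live behind array access; cast to get a plain string
            $name = (string) $child['name'];
            echo $name . "<br />";
            // equivalent: (string) $child->attributes()->name
        }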

    Read the article

  • Multiple collections tied to one base collection with filters and eventing

    - by damienc88
    I have a complex model served from my back end, which has a bunch of regular attributes, some nested models, and a couple of collections. My page has two tables, one for invalid items, and one for valid items. The items in question are from one of the nested collections. Let's call it baseModel.documentCollection, implementing DocumentsCollection. I don't want any filtration code in my Marionette.CompositeViews, so what I've done is the following (note, duplicated for the 'valid' case): var invalidDocsCollection = new DocumentsCollection( baseModel.documentCollection.filter(function(item) { return !item.isValidItem(); }) ); var invalidTableView = new BookIn.PendingBookInRequestItemsCollectionView({ collection: app.collections.invalidDocsCollection }); layout.invalidDocsRegion.show(invalidTableView); This is fine for actually populating two tables independently, from one base collection. But I'm not getting the whole event pipeline down to the base collection, obviously. This means when a document's validity is changed, there's no neat way of it shifting to the other collection, therefore the other view. What I'm after is a nice way of having a base collection that I can have filter collections sit on top of. Any suggestions?
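
    A sketch of one common pattern, reusing the names from the question (a validDocsCollection mirroring the invalid one is assumed): keep both filtered collections subscribed to the base collection and rebuild them whenever it changes, so a validity flip moves the model from one table to the other.

        var base = baseModel.documentCollection;

        function refilter() {
            // reset() swaps contents in place, so the existing views keep
            // rendering the same collection instances
            invalidDocsCollection.reset(base.filter(function (item) {
                return !item.isValidItem();
            }));
            validDocsCollection.reset(base.filter(function (item) {
                return item.isValidItem();
            }));
        }

        // 'change' catches a document flipping validity; add/remove/reset
        // keep both tables in sync when the base collection itself changes
        base.on('add remove reset change', refilter);
        refilter();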

    Read the article

  • How can I have a single helper work on different models passed to it?

    - by Angela
    I will probably need to refactor in two steps, since I'm still developing the project and learning the use-cases as I go along; it is to scratch my own itch. I have three models: Letters, Calls, Emails. They have some similarity, but I anticipate they will also have some different attributes, as you can tell from their descriptions. Ideally I could refactor them as Events, with a type of Letter, Call, or Email, but I didn't know how to extend subclasses. My immediate need is this: I have a helper which checks on the status of whether an email (for example) was sent to a specific contact: def show_email_status(contact, email) @contact_email = ContactEmail.find(:first, :conditions => {:contact_id => contact.id, :email_id => email.id }) if ! @contact_email.nil? return @contact_email.status end end I realized that I, of course, want to know the status of whether a call was made to a contact as well, so I wrote: def show_call_status(contact, call) @contact_call = ContactCall.find(:first, :conditions => {:contact_id => contact.id, :call_id => call.id }) if ! @contact_call.nil? return @contact_call.status end end I would love to have a single helper show_status where I could say show_status(contact, call) or show_status(contact, email) and it would know whether to look for the object @contact_call or @contact_email. Yes, it would be easier if it were just @contact_event, but I want to do a small refactoring while I get the program up and running, and this would make building a history for a given contact much easier. Thanks!
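
    One way to collapse the two helpers into the show_status(contact, call-or-email) shape described, without waiting for the full Events refactor (a sketch; it assumes the ContactCall/ContactEmail naming shown above):

        def show_status(contact, event)
          # event.class.name is "Call" or "Email", so this resolves to
          # ContactCall or ContactEmail with the matching foreign key
          join_class  = "Contact#{event.class.name}".constantize
          foreign_key = "#{event.class.name.underscore}_id".to_sym
          record = join_class.find(:first, :conditions => {
            :contact_id => contact.id, foreign_key => event.id })
          record.status unless record.nil?
        end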

    Read the article

  • Why is my SAX handler returning an object with no values? I am setting it just fine

    - by Blankman
    I'm writing a SAX parser for an XML document, and the object it returns doesn't have the values that I set in the event callbacks. My class structure is like this: public class ProductSAXHandler extends DefaultHandler { private Product product; public ProductSAXHandler() { product = new Product(); } public Product ParseXmlFile(String xml) { SAXParserFactory spf = new ... XMLReader parser = .... parser.parse(xml); return product; } public void StartElement(....) { for(int ...) { // looping through attributes if(qName == "description" && name == "sku") { product.setSKU(value); } } } } When I am in debug mode, the value of product does get set, and I can see that the product's sku field has the correct value. But for some reason the product object returned is just a new Product object with no values set during the parsing. What am I doing wrong here? It must be me not understanding how these events are fired, etc.
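
    A sketch of the usual wiring, with a few things worth checking against the original: the SAX callback must be named startElement (lower case) or it is never invoked as an override; qName == "description" compares references rather than contents (use .equals()); parse(String) treats its argument as a URI, so raw XML needs an InputSource; and the handler instance you parse with must be the same one whose product you return. Product and setSKU are taken from the question.

        import java.io.StringReader;
        import javax.xml.parsers.SAXParser;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.InputSource;
        import org.xml.sax.helpers.DefaultHandler;

        public class ProductSAXHandler extends DefaultHandler {
            private final Product product = new Product();

            public Product parseXmlString(String xml) throws Exception {
                SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
                // pass *this* as the handler; its callbacks fill in the
                // same product instance that is returned below
                parser.parse(new InputSource(new StringReader(xml)), this);
                return product;
            }

            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs) {
                if ("description".equals(qName)) {
                    String sku = attrs.getValue("sku");
                    if (sku != null) {
                        product.setSKU(sku);
                    }
                }
            }
        }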

    Read the article

  • Overriding CSS properties for iframe width

    - by user2898989
    I'm trying to put an iframe into a webpage, but no matter what I try to put in either the iframe properties or the custom CSS section of the website builder (or how many times I try to add !important to anything from width to right-margin), I can't get the iframe to extend rightward further than the page's preset width. Here's an example of the page and iframe that I'm working with: http://fmlcapitalinvestment.com/Search_Properties.html I need that script/iframe to be wide enough to show the search area. It seems pointless to copy and paste code and attributes I've tried setting, because nothing I do seems to have any effect, but just for showing how much I have no idea what I'm doing, here's my iframe code: <iframe id="idxFrame" style="padding:0; margin:0; padding-top: 0px; overflow-x:auto; width:1000px!important; border:0px solid transparent; background-color:transparent; max-width:none!important; right-margin:-200px!important" frameborder="0" scrolling="on" src="http://www.themls.com/IDXNET/Default.aspx?wid=8MSsp7Pf9eI55yjkDuB%2blX5awn7LnnVXh5PNYhq2ImAEQL" width="1200px" height="900px"></iframe> The "Website Builder" that I'm forced to use to make these kinds of pages is infuriating, but it does have a "Custom CSS" area where I can input additional CSS information. Is there something I could generically use to set iframes to their own widths?
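
    Something along these lines is worth trying in the Custom CSS area (a sketch; note that the inline style above uses right-margin, which is not a real CSS property, so that rule is silently dropped; the valid name is margin-right):

        #idxFrame {
            width: 1200px !important;
            max-width: none !important;
            margin-right: -200px !important; /* not "right-margin" */
        }
        /* if the builder caps a parent container rather than the iframe,
           that wrapper needs its own width/overflow override; the selector
           depends on the builder's generated markup */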

    Read the article

  • click event working in Chrome but error in Firefox and IE

    - by Hunter Stanchak
    I am writing a messaging system with jQuery. When you click on a thread title with the class '.open_message', it opens a thread with all the messages for that thread via Ajax. My issue is that when the thread title is clicked, it does not recognize the id attribute for that specific thread title in Firefox and IE. It works fine in Chrome, though. Here is the code: $('.open_message').on('click', function(e) { $(this).parent().removeClass('unread'); $(this).parent().addClass('read'); $('.message_container').html(''); var theID = e.currentTarget.attributes[0].value; theID = theID.replace('#', ''); var url = '".$url."'; var dataString = 'thread_id=' + theID; $('.message_container').append('<img id=\"loading\" src=\"' + url + '/images/loading.gif\" width=\"30px\" />'); $.ajax({ type: 'POST', url: 'get_thread.php', data: dataString, success: function(result) { $('#loading').hide(); $('.message_container').append(result); } }); return false; }); Thanks for the help!
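
    One likely culprit (a guess from the snippet): e.currentTarget.attributes[0] grabs whichever attribute the browser happens to list first, and attribute order is not guaranteed, so Chrome and Firefox/IE can disagree. Reading the attribute by name is portable; this sketch assumes the thread id lives in the href, since the handler strips a '#' (if it is actually in the id, swap 'href' for 'id'):

        $('.open_message').on('click', function() {
            // fetch the attribute by name instead of by position
            var theID = $(this).attr('href').replace('#', '');
            // ... rest of the handler unchanged
        });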

    Read the article

  • SQL Server: Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of: Performance: is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: how important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else? Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data whose real maximum size will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.
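
    On the style/validation point, a narrow column turns a bad import into an immediate, loud failure instead of silently stored junk; a minimal sketch:

        -- with VARCHAR(30), a 200-character "surname" fails at insert time
        CREATE TABLE Person (Surname VARCHAR(30) NOT NULL);
        INSERT INTO Person VALUES (REPLICATE('x', 200));
        -- Msg 8152: String or binary data would be truncated.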

    Read the article

  • jQuery: serializing array returns empty string

    - by John Smith
    I did not forget to add name attributes as is a common problem and yet my serialized form is returning an empty string. What am I doing wrong? HTML/javascript: <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script> <script> $( document ).ready( function() { $('#word_form').submit(function(e) { e.preventDefault(); console.log($(this).serialize()); //returns an empty string }); }); </script> </head> <body> <div id="wrapper"> <form name="word_form" id="word_form" method="POST"> <input type="image" name="thumbsUp" id="thumb1" value="1" src="http://upload.wikimedia.org/wikipedia/commons/8/87/Symbol_thumbs_up.svg" style="width:50px;height:50px;"> <input type="image" name="thumbsDown" id="thumb2" value="2" src="http://upload.wikimedia.org/wikipedia/commons/8/84/Symbol_thumbs_down.svg" style="width:50px;height:50px;"> </form> </div> </body> Thanks!
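
    That behavior is actually by design: jQuery's .serialize() skips image (and submit) inputs, because they only count as successful controls when clicked. One workaround is to copy the clicked value into a hidden field first; a sketch, assuming a hypothetical <input type="hidden" name="vote" id="vote"> is added inside the form:

        $('#word_form input[type=image]').on('click', function(e) {
            e.preventDefault();
            $('#vote').val($(this).val());            // 1 for up, 2 for down
            console.log($('#word_form').serialize()); // now "vote=1" or "vote=2"
        });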

    Read the article

  • How to get tables of an Access DB into a list box using C#?

    - by fathatsme
    Hi everyone! I need to create a form in which I have to browse and open .mdb files. I did this part using an OpenFileDialog: private void button1_Click(object sender, EventArgs e) { OpenFileDialog oDlg = new OpenFileDialog(); oDlg.Title = "Select MDB"; oDlg.Filter = "MDB (*.Mdb)|*.mdb"; oDlg.RestoreDirectory = true; string dir = Environment.GetFolderPath(Environment.SpecialFolder.Desktop); oDlg.InitialDirectory = dir; DialogResult result = oDlg.ShowDialog(); if (result == DialogResult.OK) { textBox1.Text = oDlg.FileName.ToString(); } } This is my code so far. Now I need to make three list boxes: the first to display the table names of the database, the second to display field names when a table name is clicked, and the third to display the attributes of a field when it is clicked. We should be able to edit the attribute values, and on clicking a Save button it should update the database. Please help; I'm new to C#, so if you could help specifically with what I asked, it would be really helpful to me.
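
    For the first list box, the schema rowsets exposed by OleDbConnection will list the tables; a sketch, assuming a Jet connection string built from the selected path:

        using System.Data;
        using System.Data.OleDb;

        private void LoadTables(string mdbPath)
        {
            string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath;
            using (var conn = new OleDbConnection(connStr))
            {
                conn.Open();
                // the "TABLE" restriction filters out system tables
                DataTable schema = conn.GetOleDbSchemaTable(
                    OleDbSchemaGuid.Tables,
                    new object[] { null, null, null, "TABLE" });
                listBox1.Items.Clear();
                foreach (DataRow row in schema.Rows)
                {
                    listBox1.Items.Add(row["TABLE_NAME"].ToString());
                }
            }
        }

    The second box can be filled the same way from OleDbSchemaGuid.Columns, restricted by the selected table name.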

    Read the article

  • dynamic directives in angularjs

    - by user28061
    The directive's attributes don't change when the scope is updated, they still keep the initial value. What am I missing here? HTML <ul class="nav nav-pills nav-stacked" navlist> <navelem href="#!/notworking/{{foo}}"></navelem> <navelem href="#!/working">works great</navelem> </ul> <p>works: {{foo}}</p> Javascript (based on angular tabs example on front-page) angular.module('myApp.directives', []). directive('navlist', function() { return { scope: {}, controller: function ($scope) { var panes = $scope.panes = []; this.select = function(pane) { angular.forEach(panes, function(pane) { pane.selected = false; }); pane.selected = true; } this.addPane = function(pane) { if (panes.length == 0) this.select(pane); panes.push(pane); } } } }). directive('navelem', function() { return { require: '^navlist', restrict: 'E', replace: true, transclude: true, scope: { href: '@href' }, link: function(scope, element, attrs, tabsCtrl) { tabsCtrl.addPane(scope); scope.select = tabsCtrl.select; }, template: '<li ng-class="{active: selected}" ng-click="select(this)"><a href="{{href}}" ng-transclude></a></li>' }; });
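
    One plausible culprit for that era of Angular (a guess, worth testing): navlist declares an isolate scope (scope: {}), and in pre-1.2 Angular an isolate scope on a directive without a template also applied to child elements, so the {{foo}} inside the <ul> is evaluated against that empty scope, where foo is undefined and never updates. A sketch of the change:

        directive('navlist', function() {
            return {
                scope: true, // child scope instead of isolate, so {{foo}}
                             // still resolves against the outer controller
                controller: function($scope) {
                    // ... unchanged from the question ...
                }
            };
        });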

    Read the article

  • JPA entity relations are not populated after .persist()

    - by Tomik
    Hello, this is a sample of my two entities: @Entity public class Post implements Serializable { @OneToMany(mappedBy = "post", fetch = javax.persistence.FetchType.EAGER) @OrderBy("revision DESC") public List<PostRevision> revisions; @Entity(name="post_revision") public class PostRevision implements Serializable { @ManyToOne public Post post; private Integer revision; @PrePersist private void prePersist() { List<PostRevision> list = post.revisions; if(list.size() >= 1) revision = list.get(list.size() - 1).revision + 1; else revision = 1; } So, there's a "post" and it can have several revisions. During persisting of the revision, entity takes a look at the list of the existing revisions and finds the next revision number. Problem is that Post.revisions is NULL but I think it should be automatically populated. I guess there's some kind of problem in my source code but I don't know where. Here's my "persistence" code: Post post = new Post(); PostRevision revision = new PostRevision(); revision.post = post; em.persist(post); em.persist(revision); em.flush(); I think that after persisting "post", it becomes "managed" and all the relations should be populated from now on. Thanks for help! (Note: public attributes are just for demonstration)
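
    persist() makes the entity managed, but JPA does not maintain the in-memory inverse side of a bidirectional relation for you within the same persistence context; the list would only appear populated after a fresh load from the database. The usual fix is to set both sides yourself. A sketch, with the add placed after persist so the @PrePersist numbering logic above does not trip over the not-yet-numbered revision in its own list:

        Post post = new Post();
        post.revisions = new java.util.ArrayList<PostRevision>();

        PostRevision revision = new PostRevision();
        revision.post = post;

        em.persist(post);
        em.persist(revision);         // @PrePersist sees post.revisions = [] here
        post.revisions.add(revision); // now sync the in-memory inverse side
        em.flush();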

    Read the article

  • Mercurial on shared network drive?

    - by user1164199
    Right now I have my repo on my local drive. In order to back it up, I have to copy .hg to a Windows network drive. In "Is it a good idea to put Mercurial Repository in shared Network drive?", Lasse Karlsen said the repo shouldn't be on a shared folder on a network server because "mercurial cannot reliably hold locks in all situations". Would this still be an issue when the repository is only updated by a single user? If so, can someone explain to me why the corruption happens? A while back our IT had problems setting up a Mercurial server. I am very fond of Mercurial (it has a great interface and is very easy to work with), but if it's going to be such a pain in the neck to set up for multiple users, I am willing to look for something else. Does anyone have any suggestions (with reasons)? I am looking for a revision control program that has the following attributes: 1. A good interface (allowing you to easily see revisions and changes to the code over multiple revisions). 2. Works as a local repo or a network repo. 3. Something IT will feel comfortable installing on their network. Thanks, Stephen

    Read the article

  • Create rails record from two ids

    - by Michael Luby
    The functionality I'm trying to build allows Users to Visit a Restaurant. I have Users, Locations, and Restaurants models. Locations have many Restaurants. I've created a Visits model with user_id and restaurant_id attributes, and a visits_controller with create and destroy methods. Thing is, I can't create an actual Visit record. Any thoughts on how I can accomplish this? Or am I going about it the wrong way. Here's the code: Model: class Visit < ActiveRecord::Base attr_accessible :restaurant_id, :user_id belongs_to :user belongs_to :restaurant end View: <% @restaurants.each do |restaurant| %> <%= link_to 'Visit', location_restaurant_visits_path(current_user.id, restaurant.id), method: :create %> <% @visit = Visit.find_by_user_id_and_restaurant_id(current_user.id, restaurant.id) %> <%= @visit != nil ? "true" : "false" %> <% end %> Controller: class VisitsController < ApplicationController before_filter :find_restaurant before_filter :find_user def create @visit = Visit.create(params[:user_id => @user.id, :restaurant_id => @restaurant.id]) respond_to do |format| if @visit.save format.html { redirect_to location_restaurants_path(@location), notice: 'Visit created.' } format.json { render json: @visit, status: :created, location: @visit } else format.html { render action: "new" } format.json { render json: @visit.errors, status: :unprocessable_entity } end end end def destroy @visit = Visit.find(params[:user_id => @user.id, :restaurant_id => @restaurant.id]) @restaurant.destroy respond_to do |format| format.html { redirect_to location_restaurants_path(@restaurant.location_id), notice: 'Unvisited.' } format.json { head :no_content } end end private def find_restaurant @restaurant = Restaurant.find(params[:restaurant_id]) end def find_user @user = current_user end end
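
    Two things look off (a sketch of the likely fixes; the rest of the controller can stay as-is). First, params[:user_id => @user.id, ...] performs a hash lookup in the request params, which returns nil, rather than building an attributes hash, so create receives nothing. Second, link_to's :method option takes an HTTP verb, not a controller action:

        # controller: pass the attributes hash directly
        def create
          @visit = Visit.create(:user_id => @user.id,
                                :restaurant_id => @restaurant.id)
          # ...
        end

        # view: :post, not :create
        <%= link_to 'Visit',
            location_restaurant_visits_path(current_user.id, restaurant.id),
            method: :post %>

    destroy has the same params[...] problem; Visit.find_by_user_id_and_restaurant_id(@user.id, @restaurant.id) is the analogous fix there (and note it currently calls @restaurant.destroy rather than @visit.destroy).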

    Read the article

  • How to arrange HTML5 web page elements?

    - by Argus9
    I'm trying to make a sample web page to get acquainted with HTML5, and I'd like to try replicating Facebook's page layout; that is, the header that spans the entire width of the screen, a small footer at the bottom, and a three-column main body, consisting of a list of links on the left, the main content in the middle, and an optional section on the right (for ads, frames, etc.). It's neat and displays well in multiple window sizes. So far, I've tried to accomplish this with a <header>, <footer> and a <nav> and <section> block, respectively. There are a few anomalies with the page, however. The footer (which contains a simple text block with copyright info) appears at the top-right of the page below the header when the window is maximized. On the other hand, when there isn't enough space to display everything in the window, it places the main body text below the section. In other words, it keeps moving elements around to fit the window. Could someone please tell me how I'd achieve the look I'm going for? I've tried playing around with a few CSS attributes I read about through Google, but I'm pretty sure I don't know what I'm doing, and could really use some guidance. Thank you!
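
    The jumping around happens because the elements have no explicit widths or floats, so the browser reflows them wherever they fit. A minimal sketch of the classic approach (element names follow the tags mentioned above, with <aside> assumed for the right-hand column):

        header, footer { width: 100%; }
        nav     { float: left;  width: 20%; }
        section { float: left;  width: 60%; }
        aside   { float: right; width: 20%; }
        footer  { clear: both; } /* pins the footer under the longest column */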

    Read the article

  • passwd ldap request to ActiveDirectory fails on half of 2500 users

    - by groovehunter
    We just set up Active Directory in my company and imported all Linux users and groups. On the Linux client (configured to query LDAP in nsswitch.conf): if I do a plain ldapsearch against the AD LDAP server, I get the complete count of about 2580 users. But this only returns part of the users, 1221 in number: getent passwd | wc -l Running it with strace shows what looks like an attempt to reconnect. My ideas were: does the Linux authentication procedure run its search with a parameter incompatible with AD LDAP? Or perhaps it is an encoding issue; the Windows users are entered in AD with all kinds of characters. Maybe someone could shed light on this and give a hint on how to debug it further. Here's our ldap.conf: host audc01.mycompany.de audc03.mycompany.de base ou=location,dc=mycompany,dc=de ldap_version 3 binddn cn=manager,ou=location,dc=mycompany,dc=de bindpw Password timelimit 120 idle_timelimit 3600 nss_base_passwd cn=users,cn=import,ou=location,dc=mycompany,dc=de?sub nss_base_group ou=location,dc=mycompany,dc=de?sub # RFC 2307 (AD) mappings nss_map_objectclass posixAccount User # nss_map_objectclass shadowAccount User nss_map_objectclass posixGroup Group nss_map_attribute uid sAMAccountName nss_map_attribute cn sAMAccountName # Display Name nss_map_attribute gecos cn ## nss_map_attribute homeDirectory unixHomeDirectory nss_map_attribute loginShell msSFU30LoginShell # PAM attributes pam_login_attribute sAMAccountName # Location based login pam_groupdn CN=Location-AU-Login,OU=au,OU=Location,DC=mycompany,DC=de pam_member_attribute msSFU30PosixMember ## pam_lookup_policy yes pam_filter objectclass=User nss_initgroups_ignoreusers avahi,avahi-autoipd,backup,bin,couchdb,daemon,games,gdm,gnats,haldaemon,hplip,irc,kernoops,libuuid,list,lp,mail,man,messagebus,news,proxy,pulse,root,rtkit,saned,speech-dispatcher,statd,sync,sys,syslog,usbmux,uucp,www-data And here is the trace from strace getent passwd: poll([{fd=4, events=POLLIN|POLLPRI|POLLERR|POLLHUP}], 1, 120000) = 1 ([{fd=4, revents=POLLIN}]) read(4, "0\204\0\0\0A\2\1", 8) = 8 read(4, "\4e\204\0\0\0\7\n\1\0\4\0\4\0\240\204\0\0\0+0\204\0\0\0%\4\0261.2."..., 63) = 63 stat64("/etc/ldap.conf", {st_mode=S_IFREG|0644, st_size=1151, ...}) = 0 geteuid32() = 12560 getsockname(4, {sa_family=AF_INET, sin_port=htons(60334), sin_addr=inet_addr("10.1.35.51")}, [16]) = 0 getpeername(4, {sa_family=AF_INET, sin_port=htons(389), sin_addr=inet_addr("10.1.5.81")}, [16]) = 0 time(NULL) = 1297684722 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 munmap(0xb7617000, 1721) = 0 close(3) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 rt_sigaction(SIGPIPE, {SIG_DFL, [], 0}, NULL, 8) = 0 rt_sigaction(SIGPIPE, {SIG_IGN, [], 0}, {SIG_DFL, [], 0}, 8) = 0 write(4, "0\5\2\1\5B\0", 7) = 7 shutdown(4, 2 /* send and receive */) = 0 close(4) = 0 shutdown(-1, 2 /* send and receive */) = -1 EBADF (Bad file descriptor) close(-1) = -1 EBADF (Bad file descriptor) exit_group(0) = ?
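
    One guess worth testing: Active Directory caps an unpaged LDAP search at 1000 entries (the MaxPageSize policy), and a getent run that stops partway through the user set looks a lot like a search cut off by the server. nss_ldap can request paged results in ldap.conf:

        # fetch results in pages instead of one size-capped search
        nss_paged_results yes
        pagesize 1000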

    Read the article

  • Rotate screen using xrandr on Solaris 10

    - by sixtyfootersdude
    How do I call the xrandr command? I want to rotate my screen 90 deg. clockwise. Here is the usage: % xrandr -help usage: xrandr [options] where options are: -display <display> or -d <display> -help -o <normal,inverted,left,right,0,1,2,3> or --orientation <normal,inverted,left,right,0,1,2,3> -q or --query -s <size>/<width>x<height> or --size <size>/<width>x<height> -r <rate> or --rate <rate> or --refresh <rate> -v or --version -x (reflect in x) -y (reflect in y) --screen <screen> --verbose --dryrun --prop or --properties --fb <width>x<height> --fbmm <width>x<height> --dpi <dpi>/<output> --output <output> --auto --mode <mode> --preferred --pos <x>x<y> --rate <rate> or --refresh <rate> --reflect normal,x,y,xy --rotate normal,inverted,left,right --left-of <output> --right-of <output> --above <output> --below <output> --same-as <output> --set <property> <value> --off --crtc <crtc> --newmode <name> <clock MHz> <hdisp> <hsync-start> <hsync-end> <htotal> <vdisp> <vsync-start> <vsync-end> <vtotal> [+HSync] [-HSync] [+VSync] [-VSync] --rmmode <name> --addmode <output> <name> --delmode <output> <name> This is what I tried: % xrandr -o left X Error of failed request: BadMatch (invalid parameter attributes) Major opcode of failed request: 159 (RANDR) Minor opcode of failed request: 2 () Serial number of failed request: 16 Current serial number in output stream: 16 I am running Solaris 10.
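
    The BadMatch reply usually means the driver behind that display simply does not implement rotation, which is common with the drivers shipped on Solaris 10. Before anything else, it is worth checking what the server claims to support:

        # lists the sizes, rates and rotations the driver offers; if "left"
        # never shows up, no xrandr invocation will rotate this screen
        xrandr -q --verbose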

    Read the article

  • IIS can't load Oracle.Web assembly (for ASP.NET membership provider)

    - by Konamiman
    I am trying to configure an IIS web site to use an Oracle database for ASP.NET membership, but I can't get it to work. IIS doesn't seem to be able to load the assembly containing the Oracle membership provider. That's what I have so far: An Oracle 10g database online and with all the tables for ASP.NET membership created. Windows 2008 R2 Standard with the web server role installed, including support for ASP.NET. Oracle 11g Release 2 ODAC 11.2.0.1.2 installed. The installed components are: Oracle data provider for .NET, Oracle providers for ASP.NET, Oracle instant client. The default web site on IIS (I am using that for testing) has the following web.config file: <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.web> <membership defaultProvider="OracleMembershipProvider"> <providers> <remove name="SqlMembershipProvider" /> <add name="OracleMembershipProvider" type="Oracle.Web.Security.OracleMembershipProvider, Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342" connectionStringName="OracleServer" /> </providers> </membership> </system.web> </configuration> (Additional attributes on the "add" element omitted for brevity. Also, the connection string is defined for the whole server.) The Oracle.Web.dll file is on the GAC. That's the relevant part of the C:\Windows\Assembly folder: The web site application pool is configured for .NET 2.0, and has 32-bit applications enabled. I have allowed untrusted providers in the IIS' administration.config file (just for the sake of testing, I'll explicitly add the assembly to the trusted providers list later). With all of this setup in place, when I click on the ".NET Users" icon on the IIS manager, I get a warning about the provider having too much privileges, and when I accept I get the following message: There was an error while performing this operation. Details: Could not load file or assembly 'Oracle.Web, Version=2.112.1.2, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. The system cannot find the file specified. So, what am I missing? How can I get the Oracle membership provider to work? Thank you! UPDATE: It seems that the problem is not with IIS itself, but with the IIS administrator only. When using the web site configuration tool provided by Visual Studio, everything works fine.
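
    One thing worth ruling out (a guess, since the app itself may run fine, as the update above suggests): IIS Manager is a 64-bit process, so it cannot load a 32-bit Oracle.Web from GAC_32 even when the app pool is set to 32-bit. Checking where the assembly actually landed tells you which case you have:

        rem gacutil ships with the Windows SDK
        gacutil /l Oracle.Web
        rem 32-bit-only assemblies land here, invisible to 64-bit processes
        dir %windir%\assembly\GAC_32\Oracle.Web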

    Read the article

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side, I have a Linux-based firewall running Debian with Openswan installed. I am having an issue with getting to Phase 2 of the VPN negotiation. Here is the Cisco information I was sent: {my_public_ip} = left side of connection tunnel-group {my_public_ip} type ipsec-l2l tunnel-group {my_public_ip} ipsec-attributes pre-shared-key fakefake crypto map vpn1 1 match add customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255 crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map vpn1 1 match address customer-ipsec crypto map vpn1 1 set peer {my_public_ip} crypto map vpn1 1 set transform-set aes-256-sha crypto map vpn1 interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption aes-256 hash sha group 2 lifetime 86400 My side's ipsec.conf: config setup klipsdebug=none plutodebug=none protostack=netkey #nat_traversal=yes conn cisco #name of VPN connection type=tunnel authby=secret #left side (my side) left={myPublicIP} leftsubnet=172.16.250.0/24 #net subnet on left side to assign to right side leftnexthop=%defaultroute #right security gateway (ASA side) right={CiscoASA_publicIP} #cisco ASA rightsubnet=10.2.1.0/24 rightnexthop=%defaultroute #crypto stuff keyexchange=ike ikelifetime=86400s auth=esp pfs=no compress=no auto=start ipsec.secrets file: {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake" When I start ipsec from the left side/my side I don't receive any errors; however, when I run the ipsec auto --status command: 000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0 000 "cisco": myip=unset; hisip=unset; 000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0 000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0; 000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0; 000 000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate 000 #2: pending Phase 2 for "cisco" replacing #0 Now, I'm new to setting up a site-to-site IPSEC tunnel, so I am unsure what the status information means. All I know is that it sits at this "pending Phase 2" and I can't ping the other side. Another question: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Debian Linux firewall on my side? Appreciate your time to look at this.
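
    One thing worth trying (a sketch, untested): pin Openswan's proposals to exactly what the ASA declares above instead of letting it offer defaults; a proposal mismatch is a common way to stall before Phase 2 completes. On the side questions: with protostack=netkey there is no ipsec0 interface to create (that is KLIPS-era configuration), and route -n normally shows nothing for the tunnel, since netkey policies live in the kernel SPD (ip xfrm policy lists them).

        conn cisco
                ike=aes256-sha1;modp1024   # aes-256 / sha / group 2, per the ASA
                esp=aes256-sha1            # matches transform-set aes-256-sha
                ikelifetime=86400s
                keylife=28800s             # ASA ipsec SA lifetime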

    Read the article

  • GeForce 8800GT not even giving basic output

    - by Sam
    My Dad bought a GeForce 8800GT graphics card quite a long time ago now. It has never worked in his PC. Print out from a dxdiag: System Information Time of this report: 4/13/2010, 19:52:40 Machine name: USER-PC Operating System: Windows Vista™ Home Premium (6.0, Build 6001) Service Pack 1 (6001.vistasp1_gdr.091208-0542) Language: English (Regional Setting: English) System Manufacturer: To Be Filled By O.E.M. System Model: To Be Filled By O.E.M. BIOS: Default System BIOS Processor: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (4 CPUs), ~2.3GHz Memory: 2046MB RAM Page File: 1045MB used, 3296MB available Windows Dir: C:\Windows DirectX Version: DirectX 10 DX Setup Parameters: Not found DxDiag Version: 6.00.6001.18000 32bit Unicode DxDiag Notes Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. DirectX Debug Levels Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) Display Devices Card name: ATI Radeon HD 2400 PRO Manufacturer: ATI Technologies Inc. Chip type: ATI Radeon Graphics Processor (0x94C3) DAC type: Internal DAC(400MHz) Device Key: Enum\PCI\VEN_1002&DEV_94C3&SUBSYS_00000000&REV_00 Display Memory: 1012 MB Dedicated Memory: 245 MB Shared Memory: 767 MB Current Mode: 1280 x 960 (32 bit) (75Hz) Monitor: Generic PnP Monitor Driver Name: atiumdag.dll,atiumdva.dat,atitmmxx.dll Driver Version: 7.14.0010.0523 (English) DDI Version: 10 Driver Attributes: Final Retail Driver Date/Size: 8/22/2007 02:43:14, 3021312 bytes That info is from the current card that is installed in it and has been installed since its purchase roughly 3-4 years ago. When I physically install the card I put it into a purple slot on the motherboard that the old card was in (if I go into the device manager and select properties on the current card it confirms that the slot is a "PCI Slot 16 (PCI bus 2, device 0, function 0)") and boot up the computer but get absolutely no output. The screen that we have registers that it is connected to something (by not displaying the screen it does when the cable is unplugged) but just remains blank, no output at all. I recently took the card to my University and one of my friends who is better with hardware issues than I am tried it in his system and it worked perfectly. No issues whatsoever. I do not have a spec list for his system but I could get one if you need it. If you need any more information on this issue I will be happy to supply you with it as I am starting to get very annoyed with this problem ^_^

    Read the article

  • Windows 7 file-based backup service

    - by Ben Voigt
    I'm looking for a good replacement for Lazy Mirror, since it doesn't support Windows 7 well. Pros: One of the things I really loved about Lazy Mirror is that it always maintains a "full" backup, but does so by only copying modified files. As each file was copied, the old version got archived (moved to an out-of-the-way location). So after mirroring ran, there'd be a complete copy of the file system, which could even be booted if necessary. At the same time, extra space on the backup media was used to store as many older versions of files as possible, without wasting space storing multiple copies of the same version. It seems that with Windows 7 backup, there'd be wasted space storing the same data in both the system image and the file backup. It was completely file-based, but also aware of the registry (it had a feature to dump the live registry to hive files in the correct format). The backups were normal NTFS filesystems; no special tool was needed to read them. It automatically cleaned out the oldest previous versions when space ran out (unlike Windows 7 backup, which apparently simply starts failing when the backup media fills). It copied all file attributes, including security. Cons: It doesn't deal well with junction points, symbolic links, and hard links. It didn't run as a service without lots of help from firesrv or srvany, and then you couldn't interact with the GUI. Running as a service was necessary to be able to mirror protected OS files. It didn't have open file handling, except for registry hives. I guess that the file-by-file archive and replacement could leave mismatched sets of files if the mirror was interrupted. This would be the advantage of incremental backup techniques that require the old full backup plus all intermediate incremental backups to restore. But I don't see this as presenting much of a problem; you'd really only have a boot failure if you had a mixture of pre- and post-service pack files, and I can run a full image backup using another tool before applying a service pack. Does anyone know of a tool that does both full-system backup and storage of old versions of files like Lazy Mirror did (without storing the same data multiple times), and can also run as a service in Windows 7? Free is best of course, but a reasonably priced paid program would also be fine. (It would be absolutely awesome if it also triggered a backup/mirror pass when a particular external drive was plugged in and generated popup warnings if backups hadn't been run recently.)

    Read the article

  • How to create an EFI System Partition?

    - by Alex Popov
    TL;DR How do I create an EFI system partition from scratch? How do I put the EFI firmware on it once it is created? Long version I have a Toshiba T430 laptop. I received it with Windows 7 installed (but I think it originally shipped with Windows 8). I installed Ubuntu on it, but deleted some partitions on the disk, so that I ended up wiping out Windows and only having Ubuntu. Among the deleted partitions was the EFI System partition. I discovered that Ubuntu now boots in Legacy mode (and not UEFI). I am trying to follow this guide on converting my Ubuntu installation from Legacy to UEFI. The problem: since there is no EFI partition, whenever I choose in the BIOS to boot using UEFI, I cannot boot. That goes not only for the hard drive, but USB and DVD as well. I think this is logical: it expects an EFI partition and, since it can't find it, it cannot continue booting further, be it from HDD or DVD. So how do I recreate the EFI partition? The guide above says: Creating an EFI partition If you are manually partitioning your disk in the Ubuntu installer, you need to make sure you have an EFI partition set up. If your disk already contains an EFI partition (eg if your computer had Windows8 preinstalled), it can be used for Ubuntu too. Do not format it. It is strongly recommended to have only 1 EFI partition per disk. An EFI partition can be created via a recent version of GParted (the Gparted version included in the 12.04 disk is OK), and must have the following attributes: Mount point: /boot/efi (remark: no need to set this mount point when using the manual partitioning, the Ubuntu installer will detect it automatically) Size: minimum 100Mib. 200MiB recommended. Type: FAT32 Other: needs a "boot" flag. I had some trouble creating this partition: I boot from a live Ubuntu DVD, open GParted, create a 200MB partition and format it to FAT32. In GParted I cannot set the mount point and thus cannot set the boot flag. I didn't set the mount point in /etc/fstab, since it's a live CD and fstab looked quite different from what I expected compared to a normal boot. Anyway, I just didn't know what values to set. I booted again via the live DVD and then chose to install Ubuntu. I then created a partition with the mentioned criteria (mount point, 200MB, FAT32, boot flag). However, I continue to have this problem, and I suppose it's because there is no EFI firmware on that partition; it's just an empty partition that is merely suitable for EFI firmware. So again, how do I create an EFI partition that has the EFI software on it, so that the laptop can once again boot in UEFI mode?
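
    The partition itself can also be created non-interactively from the live session; a sketch, assuming the target disk is /dev/sda with GPT, slot 1 is free, and the new partition comes out as /dev/sda1 (check with lsblk and adjust first):

        sudo sgdisk --new=1:0:+200M --typecode=1:EF00 /dev/sda  # EF00 = EFI System
        sudo mkfs.vfat -F 32 /dev/sda1                          # FAT32, as required

    As for putting "the EFI firmware" on it: there is none to copy, since the firmware lives on the motherboard. The partition only needs a bootloader installed into it afterwards (for example grub-install from a chroot, or the Boot-Repair tool), after which the machine can boot in UEFI mode again.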

    Read the article

  • HAproxy roundrobin balancing does not appear to be distributing evenly

    - by andrew
    Hello, I know that with loaded servers, roundrobin in HAproxy (1.4.4) does not evenly distribute, but my servers are currently getting NO traffic (test setup), and roundrobin balancing does www1,www1,www1,www1,www1,...www2,www2,www2,...,www1... I'm verifying this by having the script being run on each server cat /etc/HOSTNAME (slackware). I need to have it switch back and forth each time to test some session stuff (stored in shared memcached) but am having trouble getting it to switch between my two web servers on each request. global log 127.0.0.1 local0 warning maxconn 4096 chroot /usr/share/haproxy pidfile /var/run/haproxy.pid uid 99 gid 99 daemon defaults balance roundrobin fullconn 100 maxconn 4096 mode http option dontlognull option http-server-close option forwardfor option redispatch retries 3 timeout connect 5000 timeout client 20000 timeout server 60000 timeout queue 60000 stats enable stats uri /haproxy stats auth ***:*** frontend www *:80 log global acl is_upload hdr_dom(host) -i uploads.site.com acl is_api hdr_dom(host) -i api.site.com acl is_dev hdr_dom(host) -i dev.site.com acl is_apidev hdr_dom(host) -i apidev.site.com use_backend uploads.site.com if is_upload use_backend api.site.com if is_api use_backend dev.site.com if is_dev !is_apidev default_backend site.com backend site.com option httpchk HEAD /alive.php HTTP/1.1\r\nHost:site.com server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 backend api.site.com option httpchk HEAD /alive.php HTTP/1.1\r\nHost:api.site.com server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 backend dev.site.com option httpchk HEAD /alive.php HTTP/1.1\r\nHost:dev.site.com server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 server www2 1.1.1.2:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 backend uploads.site.com option httpchk HEAD /alive.php HTTP/1.1\r\nHost:uploads.site.com server www1 1.1.1.1:8080 weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 server www2 1.1.1.2:8080 backup weight 10 minconn 5 maxconn 25 check inter 2000 rise 2 fall 2 So basically, I have some different back-ends (I've verified the ACLs are working), with the default option "roundrobin" selected. I've tried removing weights, removing the minconn/maxconn/fullconn attributes for all servers (not just the backend I'm testing), tried removing the ACLs, etc. I've been testing on dev.site.com BTW. Anyone see a reason why I can't get something like www1,www2,www1,www2,...? Also, this is one of my first questions on here, so please let me know if I left anything needed out of my post. Thanks!
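
    Before blaming HAProxy, it is worth taking browser connection reuse out of the picture; a sketch that issues independent requests from the command line (whoami.php is a stand-in for the hostname-printing script mentioned above):

        for i in $(seq 1 10); do
            curl -s -H 'Host: dev.site.com' http://haproxy-address/whoami.php
        done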

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they are causing permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors. The details of this behavior seem to also be dependent on the version of OSX the client is running. For this question, let's assume a client running 10.8.2. When I mount the CIFS share on an OSX client and create a new directory on it, the directory will be created with drwxr-x-rx permissions. This is undesirable because it will not allow anyone but me to write to the directory. There are other users in my group which should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server: [global] create mask= 0666 directory mask= 0777 [share] force directory mode= 0775 force create mode= 0660 I was under the impression that these settings should make sure that directories are at least created with rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder will have the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client. I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused because we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is unpractical for the deployment and unreasonable to expect from anyone to do. This is the complete smb.conf: [global] encrypt passwords = yes dns proxy = no strict locking = no read raw = yes write raw = yes oplocks = yes max xmit = 65535 deadtime = 15 display charset = LOCALE max log size = 10 syslog only = yes syslog = 1 load printers = no printing = bsd printcap name = /dev/null disable spoolss = yes smb passwd file = /var/etc/private/smbpasswd private dir = /var/etc/private getwd cache = yes guest account = nobody map to guest = Bad Password obey pam restrictions = Yes # NOTE: read smb.conf. 
directory name cache size = 0 max protocol = SMB2 netbios name = freenas workgroup = COMPANY server string = FreeNAS Server store dos attributes = yes hostname lookups = yes security = user passdb backend = ldapsam:ldap://ldap.company.local ldap admin dn = cn=admin,dc=company,dc=local ldap suffix = dc=company,dc=local ldap user suffix = ou=Users ldap group suffix = ou=Groups ldap machine suffix = ou=Computers ldap ssl = off ldap replication sleep = 1000 ldap passwd sync = yes #ldap debug level = 1 #ldap debug threshold = 1 ldapsam:trusted = yes idmap uid = 10000-39999 idmap gid = 10000-39999 create mask = 0666 directory mask = 0777 client ntlmv2 auth = yes dos charset = CP437 unix charset = UTF-8 log level = 1 [share] path = /mnt/zfs0 printable = no veto files = /.snap/.windows/.zfs/ writeable = yes browseable = yes inherit owner = no inherit permissions = no vfs objects = zfsacl guest ok = no inherit acls = Yes map archive = No map readonly = no nfs4:mode = special nfs4:acedup = merge nfs4:chown = yes hide dot files force directory mode = 0775 force create mode = 0660
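
    One experiment worth trying in the [share] section (a sketch, not a verified fix): flip the inherit settings already present above so the server-side inheritance wins over whatever mode the OS X client requests at create time, letting new directories pick up the parent's group-write bits:

        inherit permissions = yes
        inherit acls = yes
        create mask = 0664
        directory mask = 0775

    With vfs objects = zfsacl, putting an inheritable group ACL on the dataset itself attacks the same problem from the filesystem side.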

    Read the article

  • How do you back up 40+ CentOS 5.5 servers?

    - by John Little
    We are embarrassed to ask this question. Apologies for our lack of UNIX expertise. We have inherited 40+ CentOS 5.5 servers and don't know how to back them up. We need low-level, clone-type images so that we could restore the servers from scratch if we had to replace the HDs, etc. We have used the "dd" command, but we assumed this only works if you want to back up one local disk to another, not 40 servers to one server with an external USB HD attached. All 40 servers have a pair of mirrored disks (don't know if it's HW or SW RAID). Most only have 100MB used. Servers are running Apache, Zend, Tomcat, MySQL, etc. Ideally we don't want to have to shut them down to back up (but we could). We assume that standard UNIX commands like tar, cpio, rsync, scp, etc. are of no use, as they only copy files, not partitions, attributes, groups, etc.; i.e. they do not produce a result which can simply be re-imaged to a new HD to get the server back from the dead. We have a large SAN, a spare Windows box and spare UNIX boxes, but these are only visible to one layer in the network. We have an unused Dell DL2000 monster tape unit, but no software or documentation for it. We have a copy of Symantec Backup Exec, but we have no budget for UNIX client licenses. (The company has negative amounts of money.) We need to be able to initiate the backup remotely, as we can only access the servers in person in an emergency (i.e. to restore). Googling returns some applications to do this, e.g. Clonezilla (looks difficult to install and invasive) and Mondo (only seems to support backup if you are local to the machine). Amanda might be an option, but looks like days/weeks of work to learn and set up. Is there anything built into CentOS, or do we have to go the route of installing, learning and configuring a set of backup packages? Any ideas? This must be a pretty standard problem, yet googling doesn't give an obvious answer.
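
    On the dd point specifically: it is not limited to local disk-to-disk copies; piped through ssh, the image lands wherever the command runs. A sketch (run from the backup box, one server at a time; assumes root ssh access and a disk named /dev/sda; for a truly consistent image the server should be quiesced or booted from rescue media, but this shows the mechanics):

        # gzip keeps images of mostly-empty disks small; restoring is the
        # reverse pipe onto a replacement disk booted from rescue media
        ssh root@server01 "dd if=/dev/sda bs=1M | gzip -c" > server01-sda.img.gz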

    Read the article

  • WCF REST on .NET 4.0

    - by AngelEyes
    A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4 Drop the Soap: WCF, REST, and Pretty URIs in .NET 4 Years ago I was working in libraries when the Web 2.0 revolution began. One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications. Those were my first impressions of REST; pretty URIs. Turns out there is a little more to it than that. REST is an architectural style that focuses on resources and structured ways to access those resources via the web. REST evolved as an "anti-SOAP" movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don't have frameworks hiding it all). One of the biggest benefits of REST is that browsers can talk to REST services directly, because REST works using URIs, query strings, cookies, SSL, and all those HTTP verbs that we don't have to think about anymore. If you are familiar with ASP.NET MVC then you have been exposed to REST at some level. MVC relies heavily on routing to generate consistent and clean URIs. REST for WCF gives you the same type of feel for your services. Let's dive in. WCF REST in .NET 3.5 SP1 and .NET 4 This post will cover WCF REST in .NET 4, which drew heavily from the REST Starter Kit and community feedback. There is basic REST support in .NET 3.5 SP1, and you can also grab the REST Starter Kit to enable some of the features you'll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010. Getting Started To get started we'll create a basic WCF Rest Service Application using the new on-line templates option in VS 2010. Dude, Where's my .Svc File? The WCF REST template shows us the new way we can simply build services. Before we talk about what's there, let's look at what is not there: the .svc file, an interface contract, and dozens of lines of configuration that you have to change to make your service work. REST in .NET 4 is greatly simplified and leverages the web routing capabilities used in ASP.NET MVC and other parts of the web frameworks. With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class. From there, the WCF runtime handles dispatching service calls to the methods based on the URI templates. global.asax using System; using System.ServiceModel.Activation; using System.Web; using System.Web.Routing; namespace Blog.WcfRest.TimeService { public class Global : HttpApplication { void Application_Start(object sender, EventArgs e) { RegisterRoutes(); } private static void RegisterRoutes() { RouteTable.Routes.Add(new ServiceRoute("TimeService", new WebServiceHostFactory(), typeof(TimeService))); } } } The web.config contains some new structures to support configuration-free deployment. Note that this is the default config generated with the template; I did not make any changes to web.config.
web.config <?xml version="1.0"?> <configuration>   <system.web>     <compilation debug="true" targetFramework="4.0" />   </system.web>   <system.webServer>     <modules runAllManagedModulesForAllRequests="true">       <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,            System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />     </modules>   </system.webServer>   <system.serviceModel>     <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>     <standardEndpoints>       <webHttpEndpoint>         <!--             Configure the WCF REST service base address via the global.asax.cs file and the default endpoint             via the attributes on the <standardEndpoint> element below         -->         <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>       </webHttpEndpoint>     </standardEndpoints>   </system.serviceModel> </configuration> Building the Time Service We’ll create a simple “TimeService” that will return the current time.  Let’s start with the following code: using System; using System.ServiceModel; using System.ServiceModel.Activation; using System.ServiceModel.Web; namespace Blog.WcfRest.TimeService {     [ServiceContract]     [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]     [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]     public class TimeService     {         [WebGet(UriTemplate = "CurrentTime")]         public string CurrentTime()         {             return DateTime.Now.ToString();         }     } } The endpoint for this service will be http://[machinename]:[port]/TimeService.  To get the current time http://[machinename]:[port]/TimeService/CurrentTime will do the trick. The Results Are In Remember That Route In global.asax? Turns out it is pretty important.  When you set the route name, that defines the resource name starting after the host portion of the Uri. Help Pages in WCF 4 Another feature that came from the starter kit are the help pages.  To access the help pages simply append Help to the end of the service’s base Uri. Dropping the Soap Having dabbled with REST in the past and after using Soap for the last few years, the WCF 4 REST support is certainly refreshing.  I’m currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
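
    Since plain HTTP clients can talk to the service directly, a quick smoke test needs nothing but curl (the port depends on your project settings; 8080 here is just an assumption):

        curl http://localhost:8080/TimeService/CurrentTime
        curl http://localhost:8080/TimeService/help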

    Read the article
