Search Results

Search found 20887 results on 836 pages for 'instance fields'.

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS, with several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point.

    Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, the site would be unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking that we could do so by temporarily setting up a new server, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data.

    What would be the safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works while having it mounted on two instances at the same time, but I don't believe that is possible with the way filesystems work.

    The question "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that would replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
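
    For concreteness, here is a minimal boto 2.x sketch of the detach/re-attach step (my sketch, not from the original post; the region, volume IDs, instance ID and device names are all hypothetical). The array must be stopped and the filesystem unmounted on the old instance before detaching:

        import time
        import boto.ec2

        # Hypothetical IDs; substitute the real region, volumes and instance.
        conn = boto.ec2.connect_to_region('us-east-1')
        volumes = ['vol-11111111', 'vol-22222222']
        new_instance = 'i-aaaaaaaa'

        # On the old host first: umount the mount point, then mdadm --stop /dev/md0
        for vol_id in volumes:
            conn.detach_volume(vol_id)

        # Attach each volume to the new host once it reports 'available'.
        for n, vol_id in enumerate(volumes):
            while conn.get_all_volumes([vol_id])[0].status != 'available':
                time.sleep(5)
            conn.attach_volume(vol_id, new_instance, '/dev/sd%s' % chr(ord('f') + n))

        # On the new host: mdadm --assemble --scan, then mount and re-export NFS.

    This avoids ever having the array assembled on two instances at once, at the cost of a short window of downtime while the volumes move.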

  • Excel 2007 | Remove blank fields from pivot tables

    - by answertips
    Every time I create a pivot table (this happens in all Excel versions) I get one or several blank fields. How can I get rid of them? One workaround I've used is to select the blank field, then right-click | Filter | Hide Selected Items. That solves the problem, but I have to do it manually. Is there a way to automatically hide/exclude the blanks?

  • SQL cluster instance names for large project

    - by Sam
    We're setting up two clusters, one dev and one prod. The production cluster will host two SQL instances: an OLTP and a DW. The development cluster will host four OLTP non-production environments and at least one non-production DW. We're working on getting more non-prod DWs and possibly more OLTP systems. I'm considering a naming scheme like this, where PROJ is three initials for the project name:

    Dev cluster:
        MSSQLPROJD1\D1 (DEV)
        MSSQLPROJD2\D2 (TEST)
        MSSQLPROJD3\D3 (QA)
        MSSQLPROJD4\D4 (STAGE)
        MSSQLPROJD5\D5 (DW)

    Prd cluster:
        MSSQLPROJP1\P1 (PRD)
        MSSQLPROJP2\P2 (DW)

    To the left of the slash, each name must be unique network-wide. On each server, the instance name to the right of the slash must be unique. Any thoughts on this? I'm trying to avoid having instance names drift from reality as the project progresses - say we change what we call a certain environment, or want to repurpose one. Then we can update a listing of the purposes for the instances and be done with it. How has a scheme like this worked out for you? Maybe you do things another way in your shop - tell me about it. Thanks.

  • Configuring a MySQL 5.1 Instance on Windows 7 Professional x64 Fails

    - by Thomas Owens
    I'm trying to set up my laptops to function as mobile development environments. Installing the software on my Linux machine and getting it configured was fairly straightforward; however, I'm having trouble getting MySQL 5.1 Server installed and configured on Windows 7 Professional 64-bit.

    I'm currently using the Windows MSI installer for the complete MySQL 5.1 system (as opposed to the Essentials installer that is also available). I've tried to install using both the 32-bit and 64-bit versions of MySQL 5.1 - the same events occur in both. I've installed both the Server Instance Configuration Wizard and Workbench, and everything appears to be installed just fine.

    When I open the Instance Configuration Wizard, I select Detailed Configuration. On the next screen I select Development Environment, then Multifunctional Database on the screen after that. I leave the InnoDB settings unchanged. I select Manual Setting with 5 concurrent connections. I enable TCP/IP Networking on port 3306 and Enable Strict Mode. I select the Standard Character Set. I check the boxes for Install as a Windows Service (and provide the name "MySQL") and Include the Bin Directory in Windows PATH. On the next screen, I set my root user name and password. I do not enable root access from remote machines, and I also do not create an anonymous account.

    On the final screen of the wizard, when I click "Execute", the first two tasks (Prepare Configuration and Write Configuration File) complete. However, when it reaches Start Service, the wizard hangs and becomes unresponsive ("Not Responding" appears in the title bar and in Task Manager).

    I would really like to be able to use both my Windows and Linux laptops as full-blown mobile development environments, but I can't do that without being able to run MySQL. Has anyone encountered this problem before? What options do I have to correct it?

  • sysbench memory test on ec2 small instance

    - by caribio
    I'm seeing a problem with the sysbench memory test (the default version that's compiled in). This is on Ubuntu Maverick, sysbench installed via apt-get install sysbench. Running the same thing on Ubuntu @ Rackspace worked just as expected. While the CPU and I/O tests worked fine on EC2 servers, the memory test just runs without doing anything (notice the 0M in the test results).

    The instance used was the publicly available 'stock' Ubuntu image with no changes to it:

        ./ec2-run-instances ami-ccf405a5 --instance-type m1.small --region us-east-1 --key mykey

    Supplying more arguments (such as: --memory-block-size=1K --memory-total-size=102400M) didn't help. What am I doing wrong? Thanks.

        sysbench --num-threads=4 --test=memory run
        sysbench 0.4.12:  multi-threaded system evaluation benchmark

        Running the test with following options:
        Number of threads: 4

        Doing memory operations speed test
        Memory block size: 1K

        Memory transfer size: 0M

        Memory operations type: write
        Memory scope type: global
        Threads started!
        Done.

        Operations performed: 0 (0.00 ops/sec)

        0.00 MB transferred (0.00 MB/sec)

        Test execution summary:
            total time:                          0.0003s
            total number of events:              0
            total time taken by event execution: 0.0000
            per-request statistics:
                 min:                            18446744073709.55ms
                 avg:                            0.00ms
                 max:                            0.00ms

        Threads fairness:
            events (avg/stddev):           0.0000/0.00
            execution time (avg/stddev):   0.0000/0.00

  • Windows Azure Error: Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found

    - by DigiMortal
    When you run some of your Windows Azure applications while the storage emulator is not configured, you may get the following error: "Windows Azure Tools: Failed to initialize Windows Azure storage emulator. Unable to start Development Storage. Failed to start Storage Emulator: the SQL Server instance ‘localhost\SQLExpress’ could not be found. Please configure the SQL Server instance for Storage Emulator using the ‘DSInit’ utility in the Windows Azure SDK." Here’s how to solve this problem.

    You need to run the DSInit utility to create the database. For Windows Azure SDK 1.6 the location of the DSInit utility is:

        C:\Program Files\Windows Azure Emulator\emulator\devstore

    By default DSInit expects your database server to be (local)\SQLEXPRESS, but you can change that easily. If you have a SQL Server instance called SQLEXPRESS, it is enough to just run DSInit. If you need some other instance, run the following command:

        DSInit /sqlinstance:<instance name>

    For the default instance, use "." as the instance name:

        DSInit /sqlinstance:.

    You can find more information about sqlinstance and other switches in the DSInit documentation. If the database was created correctly, you should see a confirmation dialog. When the storage database is ready, you can run your application.

  • Custom metadata fields in Windows file manager (or other software)

    - by Dave Gaebler
    I'm trying to organize a collection of maybe 500 or so journal articles, stored in a combination of .pdf and .djvu formats. I'd like to be able to sort the collection by author(s), title, journal name, year, and subject keywords. Is there a way to create metadata fields for this information in the Windows file system (similar to how .mp3 files come with tags for album, title, track length, etc)? Or, if not, is there some software (preferably free) that can do something similar?

  • Extjs - Dynamically generate fields in a FormPanel

    - by Benjamin
    Hi all, I've got a script that generates a form panel:

        var form = new Ext.FormPanel({
            id: 'form-exploit-zombie-' + zombie_ip,
            formId: 'form-exploit-zombie-' + zombie_ip,
            border: false,
            labelWidth: 75,
            formBind: true,
            defaultType: 'textfield',
            url: '/ui/modules/exploit/new',
            autoHeight: true,
            buttons: [{
                text: 'Execute exploit',
                handler: function() {
                    var form = Ext.getCmp('form-exploit-zombie-' + zombie_ip);
                    form.getForm().submit({
                        waitMsg: 'Running exploit ...',
                        success: function() {
                            Ext.beef.msg('Yeh!', 'Exploit sent to the zombie.')
                        },
                        failure: function() {
                            Ext.beef.msg('Ehhh!', 'An error occured while trying to send the exploit.')
                        }
                    });
                }
            }]
        });

    The same script then retrieves a JSON file from my server which defines how many input fields the form should contain. The script then adds those fields to the form:

        Ext.each(inputs, function(input) {
            var input_name;
            var input_type = 'TextField';
            var input_definition = new Array();
            if (typeof input == 'string') {
                input_name = input;
                var field = new Ext.form.TextField({
                    id: 'form-zombie-' + zombie_ip + '-field-' + input_name,
                    fieldLabel: input_name,
                    name: 'txt_' + input_name,
                    width: 175,
                    allowBlank: false
                });
                form.add(field);
            } else if (typeof input == 'object') {
                //input_name = array_key(input);
                for (definition in input) {
                    if (typeof definition == 'string') {
                    }
                }
            } else {
                return;
            }
        });

    Finally, the form is added to the appropriate panel in my interface:

        panel.add(form);
        panel.doLayout();

    The problem I have is: when I submit the form by clicking on the button, the HTTP request sent to my server does not contain the fields added to the form. In other words, I'm not posting those fields to the server. Does anyone know why, and how I could fix that? Thanks for your time.

  • WCF Datacontract, some fields do not deserialize

    - by jhersh
    Problem: I have a WCF service set up to be an endpoint for a call from an external system. The call sends plain XML. I am testing the system by sending calls into the service from Fiddler using the RequestBuilder. The issue is that all of my fields are deserialized with the exception of two: price_retail and price_wholesale. What am I missing? All of the other fields deserialize without an issue - the service responds. It is just these fields.

    XML message:

        <widget_conclusion>
            <list_criteria_id>123</list_criteria_id>
            <list_type>consumer</list_type>
            <qty>500</qty>
            <price_retail>50.00</price_retail>
            <price_wholesale>40.00</price_wholesale>
            <session_id>123456789</session_id>
        </widget_conclusion>

    Service method:

        public string WidgetConclusion(ConclusionMessage message)
        {
            var priceRetail = message.PriceRetail;
        }

    Message class:

        [DataContract(Name = "widget_conclusion", Namespace = "")]
        public class ConclusionMessage
        {
            [DataMember(Name = "list_criteria_id")]
            public int CriteriaId { get; set; }

            [DataMember(Name = "list_type")]
            public string ListType { get; set; }

            [DataMember(Name = "qty")]
            public int ListQuantity { get; set; }

            [DataMember(Name = "price_retail")]
            public decimal PriceRetail { get; set; }

            [DataMember(Name = "price_wholesale")]
            public decimal PriceWholesale { get; set; }

            [DataMember(Name = "session_id")]
            public string SessionId { get; set; }
        }

  • django-mptt fields showing up twice, breaking SQL

    - by Dominic Rodger
    I'm using django-mptt to manage a simple CMS, with a model called Page, which looks like this (most presumably irrelevant fields removed; the removed fields are called attachments, headline_image, nav_override, and published):

        class Page(mptt.Model, BaseModel):
            title = models.CharField(max_length = 20)
            slug = AutoSlugField(populate_from = 'title')
            contents = models.TextField()
            parent = models.ForeignKey('self', null=True, blank=True,
                                       related_name='children',
                                       help_text = u'The page this page lives under.')

    All works fine using SQLite, but when I use MySQL and try to add a Page using the admin (or using ModelForms and the save() method), I get this:

        ProgrammingError at /admin/mycms/page/add/
        (1110, "Column 'level' specified twice")

    where the SQL generated is:

        'INSERT INTO `kaleo_page` (`title`, `slug`, `contents`, `nav_override`, `parent_id`, `published`, `headline_image_id`, `lft`, `rght`, `tree_id`, `level`, `lft`, `rght`, `tree_id`, `level`) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)'

    For some reason I'm getting the django-mptt fields (lft, rght, tree_id and level) twice. It works in SQLite presumably because SQLite is more forgiving about what it accepts than MySQL. get_all_field_names() also shows them twice:

        >>> Page._meta.get_all_field_names()
        ['attachments', 'children', 'contents', 'headline_image', 'id', 'level', 'lft', 'nav_override', 'parent', 'published', 'rght', 'slug', 'title', 'tree_id']

    which is presumably why the SQL is bad. What could I have done that would result in those fields appearing twice in get_all_field_names()?
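
    As a throwaway diagnostic (my addition, not from the original post), counting the concrete field objects on the model shows whether the mptt columns really are registered twice at the _meta.fields level, which is roughly what the INSERT statement is built from:

        # Sketch: list any field names that occur more than once on the model.
        names = [f.name for f in Page._meta.fields]
        print sorted(set(n for n in names if names.count(n) > 1))
        # Expect ['level', 'lft', 'rght', 'tree_id'] here if the fields were
        # contributed twice (e.g. the model being registered with mptt twice).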

  • Django forms, inheritance and order of form fields

    - by Hannson
    I'm using Django forms in my website and would like to control the order of the fields. Here's how I define my forms:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.TextArea)

        class create_form(edit_form):
            name = forms.CharField()

    The name is immutable and should only be listed when the entity is created. I use inheritance for consistency and DRY principles. What happens - which is not erroneous, in fact totally expected - is that the name field is listed last in the view/html, but I'd like the name field to be on top of summary and description. I do realize that I could easily fix it by copying summary and description into create_form and losing the inheritance, but I'd like to know if this is possible. Why? Imagine you've got 100 fields in edit_form and have to add 10 fields on the top in create_form - copying and maintaining the two forms wouldn't look so sexy then. (This is not my case, I'm just making up an example.)

    So, how can I override this behavior?

    Edit: Apparently there's no proper way to do this without going through nasty hacks (fiddling with the .fields attribute). The .fields attribute is a SortedDict (one of Django's internal data structures) which doesn't provide any way to reorder key:value pairs. It does however provide a way to insert items at a given index, but that would move the items from the class members and into the constructor. This method would work, but make the code less readable. The only other way I see fit is to modify the framework itself, which is less than optimal in most situations. In short, the code would become something like this:

        class edit_form(forms.Form):
            summary = forms.CharField()
            description = forms.CharField(widget=forms.TextArea)

        class create_form(edit_form):
            def __init__(self, *args, **kwargs):
                forms.Form.__init__(self, *args, **kwargs)
                self.fields.insert(0, 'name', forms.CharField())

    That shut me up :)
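
    For the record, a variant of the same hack that reads slightly cleaner (an assumption on my part, for the SortedDict-based forms of this Django era): keep name declared as a class attribute and just reorder the keys in __init__ via the SortedDict's keyOrder list:

        class create_form(edit_form):
            name = forms.CharField()

            def __init__(self, *args, **kwargs):
                super(create_form, self).__init__(*args, **kwargs)
                # SortedDict keeps its ordering in keyOrder, so move 'name'
                # to the front instead of re-declaring the field there.
                self.fields.keyOrder.remove('name')
                self.fields.keyOrder.insert(0, 'name')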

  • Add fields to Django ModelForm that aren't in the model

    - by Cyclic
    I have a model that looks like:

        class MySchedule(models.Model):
            start_datetime = models.DateTimeField()
            name = models.CharField('Name', max_length=75)

    With it comes its ModelForm:

        class MyScheduleForm(forms.ModelForm):
            startdate = forms.DateField()
            starthour = forms.ChoiceField(choices=((6,"6am"),(7,"7am"),(8,"8am"),
                (9,"9am"),(10,"10am"),(11,"11am"),(12,"noon"),(13,"1pm"),(14,"2pm"),
                (15,"3pm"),(16,"4pm"),(17,"5pm"),(18,"6pm"),(19,"7pm"),(20,"8pm"),
                (21,"9pm"),(22,"10pm"),(23,"11pm")))
            startminute = forms.ChoiceField(choices=((0,":00"),(15,":15"),
                (30,":30"),(45,":45")))

            class Meta:
                model = MySchedule

            def clean(self):
                starttime = time(int(self.cleaned_data.get('starthour')),
                                 int(self.cleaned_data.get('startminute')))
                try:
                    self.instance.start_datetime = datetime.combine(
                        self.cleaned_data.get("startdate"), starttime)
                except TypeError:
                    raise forms.ValidationError("There's a problem with your start or end date")
                return self.cleaned_data

    Basically, I'm trying to break the DateTime field in the model into 3 more easily usable form fields: a date picker, an hour dropdown, and a minute dropdown. Then, once I've gotten the three inputs, I reassemble them into a DateTime and save it to the model.

    A few questions:

    1) Is this totally the wrong way to go about doing it? I don't want to create fields in the model for hours, minutes, etc., since that's all basically just intermediary data, so I'd like a way to break the DateTime field into sub-fields.

    2) The difficulty I'm running into is when the startdate field is blank: it seems like it never gets checked for non-blankness, and just ends up throwing a TypeError later when the program expects a date and gets None. Where does Django check for blank inputs and raise the error that eventually goes back to the form? Is this my responsibility? If so, how do I do it, since clean_startdate() doesn't seem to be evaluated because startdate isn't in the model?

    3) Is there some better way to do this with inheritance? Perhaps inherit MyScheduleForm in a BetterScheduleForm and add the fields there? How would I do this? (I've been playing around with it for over an hour and can't seem to get it.)

    Thanks!

    [Edit: I left off the return self.cleaned_data - it was lost in the copy/paste originally.]
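
    On question 2, a sketch of the guard I would try (my suggestion, not a verified fix; it assumes "from datetime import datetime, time" as in the form above). Since DateField is required by default, a blank startdate should already produce a field error and be dropped from cleaned_data, so clean() only needs to skip the combine step when a piece is missing:

        # Sketch: only combine when all three inputs survived field validation.
        def clean(self):
            cleaned = self.cleaned_data
            if all(k in cleaned for k in ('startdate', 'starthour', 'startminute')):
                starttime = time(int(cleaned['starthour']), int(cleaned['startminute']))
                self.instance.start_datetime = datetime.combine(cleaned['startdate'], starttime)
            return cleaned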

  • DB Design to store custom fields for a table

    - by Fazal
    Hi all, this question came up based on the responses I got to http://stackoverflow.com/questions/2785033/getting-wierd-issue-with-to-number-function-in-oracle

    As everyone there suggested, storing numeric values in VARCHAR2 columns is not good practice (which I totally agree with), so I am wondering about a basic design choice our team has made and whether there are better ways to design it.

    Problem statement: we have many tables where we want to give a certain number of custom fields. The number of required custom fields is known, but what kind of attribute is mapped to each column is up to the user.

    A hypothetical scenario: say you have a laptop which stores 50 attribute values for every laptop record, and each laptop's attributes are defined by the admin who creates the laptop. One user created a laptop product, say lap1, with attributes String, String, numeric, numeric, String. A second user created laptop lap2 with attributes String, numeric, String, String, numeric. Currently the data in our design gets persisted as follows:

        Laptop table
        Id  Name  field1  field2  field3  field4  field5
        1   lap1  lappy   lappy   12      13      lappy
        2   lap2  lappy2  13      lappy2  lapp2   12

    This example simulates our requirement and our design. Now, if somebody is looking up records for lap2 with a comparison on field2, we need to apply TO_NUMBER when querying:

        select * from laptop where name='lap2' and TO_NUMBER(field2) < 15

    TO_NUMBER fails in some cases when the query plan decides to apply TO_NUMBER before the other filter.

    Questions:

    - Is this a valid design?
    - What are the alternative ways to solve this problem? One of our teammates suggested creating tables on the fly for such cases. Is that a good idea?
    - How do popular ORM tools handle custom fields or flex fields?

    I hope I was able to make sense of the question. Sorry for such a long text.

  • How slow are bit fields in C++

    - by Shane MacLaughlin
    I have a C++ application that includes a number of structures with manually controlled bit fields, something like:

        #define FLAG1 0x0001
        #define FLAG2 0x0002
        #define FLAG3 0x0004

        class MyClass
        {
            // ...
            unsigned Flags;
            int IsFlag1Set()   { return Flags & FLAG1; }
            void SetFlag1Set() { Flags |= FLAG1; }
            void ResetFlag1()  { Flags &= 0xffffffff ^ FLAG1; }
            // ...
        };

    For obvious reasons I'd like to change this to use bit fields, something like:

        class MyClass
        {
            // ...
            struct Flags
            {
                unsigned Flag1 : 1;
                unsigned Flag2 : 1;
                unsigned Flag3 : 1;
            };
            // ...
        };

    The one concern I have with making this switch is that I've come across a number of references on this site stating how slow bit fields are in C++. My assumption is that they are still faster than the manual code shown above, but is there any hard reference material covering the speed implications of using bit fields on various platforms, specifically 32-bit and 64-bit Windows? The application deals with huge amounts of data in memory and must be both fast and memory-efficient, which could well be why it was written this way in the first place.

  • Python: Parsing a colon delimited file with various counts of fields

    - by Mark
    I'm trying to parse a few files with the following format in 'clientname'.txt:

        hostname:comp1
        time: Fri Jan 28 20:00:02 GMT 2011
        ip:xxx.xxx.xx.xx
        fs:good:45
        memory:bad:78
        swap:good:34
        Mail:good

    Each section is delimited by a ':', but where lines 0, 2 and 6 have 2 fields, lines 1 and 3-5 have 3 or more fields. (A big issue I've had trouble with is the time: line, since 20:00:02 is really a time and not 3 separate fields.) I have several files like this that I need to parse, and there are many more lines in some of these files with multiple fields.

        ...
        for i in clients:
            if os.path.isfile(rpt_path + i + rpt_ext):
                # if the rpt exists then do this
                rpt = rpt_path + i + rpt_ext
                l_count = 0
                for line in open(rpt, "r"):
                    s_line = line.rstrip()
                    part = s_line.split(':')
                    print part
                    l_count = l_count + 1
            else:
                # else break
                break

    First I'm checking whether the file exists; if it does, then I open the file and parse it (eventually). As of now I'm just printing the output (print part) to make sure it's parsing right. Honestly, the only trouble I'm having at this point is the time: line. How can I treat that line differently from all the others? The time field is ALWAYS the 2nd line in all of my report files.
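
    A sketch of how I would special-case it (my suggestion; it assumes the key itself never contains a colon): split each line on the first ':' only, and fall back to a full split for every key except 'time':

        # Sketch: partition() splits on the first ':' only, so the timestamp
        # inside the 'time' line (20:00:02) stays in one piece.
        for line in open(rpt, "r"):
            s_line = line.rstrip()
            key, _, rest = s_line.partition(':')
            if key == 'time':
                part = [key, rest.strip()]  # ['time', 'Fri Jan 28 20:00:02 GMT 2011']
            else:
                part = s_line.split(':')    # the 2- and 3-field lines as before
            print part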

  • DHCP reply packets do not make it into KVM instance in OpenStack

    - by Lorin Hochstein
    I'm running a KVM instance inside of OpenStack, and it isn't getting an IP address from the DHCP server. Using tcpdump, I can see the request and reply packets on vnet0 of the compute host:

        # tcpdump -i vnet0 -n port 67 or port 68
        tcpdump: WARNING: vnet0: no IPv4 address assigned
        tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
        listening on vnet0, link-type EN10MB (Ethernet), capture size 65535 bytes
        19:44:56.176727 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:44:56.176785 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:44:56.177315 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319
        19:45:02.179834 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:45:02.179904 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
        19:45:02.180375 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319

    However, if I do the same thing on eth0 inside the KVM instance, I only see the request packets, not the reply packets. What would prevent the packets from making it from vnet0 of the host to eth0 of the guest?

    My host is running Ubuntu 12.04 and my guest is running CentOS 6.3. Note that I have added this rule to my iptables, but it doesn't resolve the issue:

        -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill

    The instance corresponds to vnet0 and is connected via br100:

        # brctl show
        bridge name     bridge id               STP enabled     interfaces
        br100           8000.54781a8605f2       no              eth1
                                                                vnet0
                                                                vnet1
        virbr0          8000.000000000000       yes

    Here's the full iptables-save:

        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *nat
        :PREROUTING ACCEPT [8323:2553683]
        :INPUT ACCEPT [7993:2494942]
        :OUTPUT ACCEPT [6158:461050]
        :POSTROUTING ACCEPT [6455:511595]
        :nova-compute-OUTPUT - [0:0]
        :nova-compute-POSTROUTING - [0:0]
        :nova-compute-PREROUTING - [0:0]
        :nova-compute-float-snat - [0:0]
        :nova-compute-snat - [0:0]
        :nova-postrouting-bottom - [0:0]
        -A PREROUTING -j nova-compute-PREROUTING
        -A OUTPUT -j nova-compute-OUTPUT
        -A POSTROUTING -j nova-compute-POSTROUTING
        -A POSTROUTING -j nova-postrouting-bottom
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
        -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
        -A nova-compute-snat -j nova-compute-float-snat
        -A nova-postrouting-bottom -j nova-compute-snat
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *mangle
        :PREROUTING ACCEPT [7969:5385812]
        :INPUT ACCEPT [7905:5363718]
        :FORWARD ACCEPT [158:48190]
        :OUTPUT ACCEPT [6877:8647975]
        :POSTROUTING ACCEPT [7035:8696165]
        -A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
        -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
        # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
        *filter
        :INPUT ACCEPT [2196774:15856921923]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [2447201:1170227646]
        :nova-compute-FORWARD - [0:0]
        :nova-compute-INPUT - [0:0]
        :nova-compute-OUTPUT - [0:0]
        :nova-compute-inst-19 - [0:0]
        :nova-compute-inst-20 - [0:0]
        :nova-compute-local - [0:0]
        :nova-compute-provider - [0:0]
        :nova-compute-sg-fallback - [0:0]
        :nova-filter-top - [0:0]
        -A INPUT -j nova-compute-INPUT
        -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
        -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
        -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
        -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
        -A FORWARD -j nova-filter-top
        -A FORWARD -j nova-compute-FORWARD
        -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
        -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
        -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
        -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
        -A OUTPUT -j nova-filter-top
        -A OUTPUT -j nova-compute-OUTPUT
        -A nova-compute-FORWARD -i br100 -j ACCEPT
        -A nova-compute-FORWARD -o br100 -j ACCEPT
        -A nova-compute-inst-19 -m state --state INVALID -j DROP
        -A nova-compute-inst-19 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A nova-compute-inst-19 -j nova-compute-provider
        -A nova-compute-inst-19 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
        -A nova-compute-inst-19 -s 10.40.0.0/16 -j ACCEPT
        -A nova-compute-inst-19 -p tcp -m tcp --dport 22 -j ACCEPT
        -A nova-compute-inst-19 -p icmp -j ACCEPT
        -A nova-compute-inst-19 -j nova-compute-sg-fallback
        -A nova-compute-inst-20 -m state --state INVALID -j DROP
        -A nova-compute-inst-20 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A nova-compute-inst-20 -j nova-compute-provider
        -A nova-compute-inst-20 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
        -A nova-compute-inst-20 -s 10.40.0.0/16 -j ACCEPT
        -A nova-compute-inst-20 -p tcp -m tcp --dport 22 -j ACCEPT
        -A nova-compute-inst-20 -p icmp -j ACCEPT
        -A nova-compute-inst-20 -j nova-compute-sg-fallback
        -A nova-compute-local -d 10.40.0.3/32 -j nova-compute-inst-19
        -A nova-compute-local -d 10.40.0.4/32 -j nova-compute-inst-20
        -A nova-compute-sg-fallback -j DROP
        -A nova-filter-top -j nova-compute-local
        COMMIT
        # Completed on Tue Apr 2 19:47:27 2013
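
    In case it helps with narrowing this down, a small scapy equivalent of the tcpdump command above (my sketch, assuming scapy is installed); run it against each hop in turn - br100, vnet0 on the host, then eth0 in the guest - to see exactly where the replies disappear:

        # Sketch: sniff DHCP traffic on one interface (requires root and scapy).
        from scapy.all import sniff

        def show(pkt):
            print pkt.summary()

        # Same BPF filter as the tcpdump invocation above.
        sniff(iface='vnet0', filter='udp and (port 67 or port 68)', prn=show)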

  • Can't launch Oneiric x64 instance on Eucalyptus

    - by Bruno Reis
    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy. More details at the end. I didn't manage to fix it, and I will file a bug.

    EDIT 2: I managed to fix it; it apparently works.

    I have a 4-machine cluster running Ubuntu Server Natty (11.04) x64. I've installed "Ubuntu Enterprise Cloud" from the installation CD (then updated it) on each of these machines. The cloud seems to work fine; I have lots of virtual machines running Natty servers on them.

    Now I'd like to run Oneiric in a virtual machine, but somehow I can't. I downloaded Oneiric's (x64) image from http://cloud-images.ubuntu.com/oneiric/current/, published it:

        uec-publish-tarball oneiric-server-cloudimg-amd64.tar.gz oneiric-server-cloudimg-amd64

    exactly as I did with Natty, then tried to launch an instance using Oneiric's image:

        euca-run-instances -n 1 -k my-key -t m1.small -z my-cloud emi-XXXXXXXX

    but the instance is not able to boot. With euca-get-console-output I get the following:

        [    0.461269] VFS: Cannot open root device "sda1" or unknown-block(0,0)
        [    0.462388] Please append a correct "root=" boot option; here are the available partitions:
        [    0.463855] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
        [    0.465331] Pid: 1, comm: swapper Not tainted 3.0.0-13-generic #22-Ubuntu
        [    0.466526] Call Trace:
        [    0.466989]  [<ffffffff815d3ee5>] panic+0x91/0x194
        [    0.467860]  [<ffffffff81ad1031>] mount_block_root+0xdc/0x18e
        [    0.468891]  [<ffffffff81ad126a>] mount_root+0x54/0x59
        [    0.469829]  [<ffffffff81ad13dc>] prepare_namespace+0x16d/0x1a7
        [    0.470883]  [<ffffffff81ad0d76>] kernel_init+0x140/0x145
        [    0.471837]  [<ffffffff815f38e4>] kernel_thread_helper+0x4/0x10
        [    0.472889]  [<ffffffff81ad0c36>] ? start_kernel+0x3df/0x3df
        [    0.473884]  [<ffffffff815f38e0>] ? gs_change+0x13/0x13

    The filesystem is labeled "cloudimg-rootfs"; inside the image both /etc/fstab and /boot/grub/grub.cfg refer to the image by that label. Everything seems to be correct, yet the kernel says it can't find the root filesystem. I've spent many hours googling, but nothing came of it. I've asked on #ubuntu-server, but nobody knew what to do. I've asked on #eucalyptus but got no answer at all. Any ideas on why this is happening and how to solve it? Thanks.

    EDIT: after many hours, I've found out that the problem has nothing to do with Eucalyptus. It looks like the image is buggy. Very, very buggy.

    The first problem is that the kernel in the image is a -generic kernel, while I suppose it should be a -virtual one. I chrooted into the image, removed the -generic packages and replaced them with the -virtual ones. I then extracted the new kernel (and replaced the original -generic one that came with the tarball), because I need it when I publish and launch an image with Eucalyptus. The problem described above was solved. But then the console started showing this:

        mount: mount point ext4 does not exist

    If you check the /etc/fstab file in the image, it says:

        LABEL=cloudimg-rootfs   ext4   defaults   0 1

    Damn, where's my mount point? Note that it is missing /proc as well. And when you think it is over, you will notice that your instance has no network connectivity. Let's check /etc/network/interfaces:

        # interfaces(5) file used by ifup(8) and ifdown(8)
        auto lo
        iface lo inet loopback

    Oh my! It is missing eth0... and here I stopped. I can't take any more. I give up. It looks like Canonical just forgot to properly set up this image. At first I thought "have I downloaded a server image by mistake?", but no, I double-checked. It really is the cloud image; it even has "cloud-init" installed (which is not, by default, on server images). They just forgot to prepare it. I will file a bug (and reference it here once that is done), and hope they fix it soon!

    EDIT 2: it looks like the network configuration was the last thing missing. I decided to test it with the fixes above, and it booted properly! However, I haven't got the slightest idea whether the image is now good to go...

  • GitLab on a fresh Ubuntu 13 EC2 instance

    - by Polly
    I've spun up a fresh Amazon EC2 instance for a micro Ubuntu 13 server to be used as a GitLab server. I know the specs are a little low, but it should serve well for my purposes. It has an elastic (static) IP address for which I have created an A record, git.mydomain.com.

    The first thing I did to the instance was add 1GB of swap to keep it happy from a memory perspective. I then set the hostname of the box to git.mydomain.com and followed https://github.com/gitlabhq/gitlabhq/blob/6-2-stable/doc/install/installation.md to the letter.

    Everything seems to have worked, except for the web server side of things. Doing a gitlab:check shows the following:

        Checking Environment ...

        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes

        Checking Environment ... Finished

        Checking GitLab Shell ...

        GitLab Shell version >= 1.7.4 ? ... OK (1.7.4)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ... can't check, you have no projects
        Running /home/git/gitlab-shell/bin/check
        Check GitLab API access: /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `open'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:878:in `block in connect'
            from /usr/local/lib/ruby/2.0.0/timeout.rb:52:in `timeout'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:877:in `connect'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:862:in `do_start'
            from /usr/local/lib/ruby/2.0.0/net/http.rb:851:in `start'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:62:in `get'
            from /home/git/gitlab-shell/lib/gitlab_net.rb:29:in `check'
            from /home/git/gitlab-shell/bin/check:11:in `<main>'
        gitlab-shell self-check failed
          Try fixing it:
          Make sure GitLab is running;
          Check the gitlab-shell configuration file: sudo -u git -H editor /home/git/gitlab-shell/config.yml
          Please fix the error above and rerun the checks.

        Checking GitLab Shell ... Finished

        Checking Sidekiq ...

        Running? ... yes
        Number of Sidekiq processes ... 1

        Checking Sidekiq ... Finished

        Checking GitLab ...

        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ... can't check, you have no projects
        Projects have satellites? ... can't check, you have no projects
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.3)

        Checking GitLab ... Finished

    It seems like I'm very nearly there. Searching on this error I have only found advice that unfortunately hasn't helped. I'm not using any kind of SSL setup, which is what a lot of the posts I found were about.

    I have tried appending "127.0.0.1 git.mydomain.com" to /etc/hosts and giving the instance a reboot, but there was no change. My config/gitlab.yml file has "host: git.mydomain.com" in it, and my gitlab-shell/config.yml has "gitlab_url: http://git.mydomain.com/" in it.

    I'm sure I'm missing something simple, but I've been through every relevant link I can find and have had no positive results; thank you in advance for any help!

  • PDF Manipulation with Adobe's Form Input Fields

    - by Justin
    Hello, I am trying to simplify a process where I currently hand-calculate the X & Y co-ordinates of each value. That works fine, but it is causing me a lot of pain because I have to do quite a number of PDFs. I know that I can open a PDF and insert "input fields" within Adobe Acrobat Pro, and it would be great if I could use PHP to connect to those input fields and insert a value from a PHP form.

    Workflow: PHP form -> PHP processing engine -> final PDF with the form values in the locations of the Adobe input fields.

    If someone has some information on something like this, it would be much appreciated.

  • Add a Session Variable or Custom field to the Elmah Error Log table

    - by VJ
    I want to add my own session variable to the Elmah error log table and display it. I have already modified the source code and added the new fields to Error.cs and the other files, but when I assign an HttpContext.Current.Session["MyVar"].ToString() value to my field in the constructor, Elmah stops logging exceptions altogether. I just need to get the value of the session variable - is there another way to do this? I read a post in which someone added fields for the email, but it does not say where exactly I should get the session value.

  • Django South Foreign Keys referring to pks with Custom Fields

    - by Rory Hart
    I'm working with a legacy database which uses the MySQL big int type, so I set up a simple custom model field to handle this:

        class BigAutoField(models.AutoField):
            def get_internal_type(self):
                return "BigAutoField"

            def db_type(self):
                return 'bigint AUTO_INCREMENT'  # Note: this won't work with Oracle.

    This works fine with django-south for the id/pk fields; mysql desc shows:

        | id | bigint(20) | NO | PRI | NULL | auto_increment |

    But for the ForeignKey fields in other models, the referring columns are created as int(11) rather than bigint(20). I assume I have to add an introspection rule to the BigAutoField, but there doesn't seem to be a mention of this sort of rule in the documentation (http://south.aeracode.org/docs/customfields.html).

    Update: Currently using Django 1.1.1 and South 0.6.2.
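
    For reference, South's documented hook for custom fields is registered like the sketch below (my addition; the ^yourapp\.fields\. path is hypothetical and must match wherever BigAutoField actually lives). Whether a rule like this also fixes the int(11) foreign key columns is exactly the open question here:

        # Sketch of South's add_introspection_rules hook; put it next to the
        # field definition so it runs whenever the field module is imported.
        from south.modelsinspector import add_introspection_rules

        add_introspection_rules([], [r"^yourapp\.fields\.BigAutoField"])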

  • Limit the file inputs cloned in a form with Jquery

    - by Philip
    Hi, I use this jQuery function to clone the file input fields in my form:

        $(function() {
            var scntDiv = $('#clone');
            var i = $('#clone p').size() + 1;

            $('#addImg').live('click', function() {
                $('<p><label for="attach"><input type="file" name="attachment_' + i + '" /> <a href="#" id="remImg">Remove</a></label></p>').appendTo(scntDiv);
                i++;
                return false;
            });

            $('#remImg').live('click', function() {
                if (i > 2) {
                    $(this).parents('p').remove();
                    i--;
                }
                return false;
            });
        });

    Is it possible to limit the number of fields that can be cloned - say, to a maximum of 4 fields? Thanks a lot, Philip

  • Best way to change Satchmo checkout page fields?

    - by konrad
    For a Satchmo project we have to change the fields a customer has to fill out during checkout. Specifically, we have to:

    - Add a 'middle name' field
    - Replace the bill and delivery addressee with separate first, middle and last name fields
    - Replace the two address lines with street, number and number extension

    These fields are expected by an upstream web service, so we need to store this data separately. What's the best way to achieve this with minimal changes in the rest of Satchmo? We prefer a solution in which we do not have to change the Satchmo code itself, but if required we can fork it.

  • Validating only selected fields using ASP.NET MVC 2 and Data Annotations

    - by thinknow
    I'm using Data Annotations with ASP.NET MVC 2 as demonstrated in this post: http://weblogs.asp.net/scottgu/archive/2010/01/15/asp-net-mvc-2-model-validation.aspx Everything works fine when creating / updating an entity where all required property values are specified in the form and valid. However, what if I only want to update some of the fields? For example, let's say I have an Account entity with 20 fields, but I only want to update Username and Password? ModelState.IsValid validates against all the properties, regardless of whether they are referenced in the submitted form. How can I get it to validate only the fields that are referenced in the form?
