Search Results

Search found 15137 results on 606 pages for 'global state'.

  • How to implement string matching based on a pattern

    - by Vincent Rischmann
    I was asked to build a tool that can identify whether a string matches a pattern. Example: the pattern {1:20} stuff t(x) {a,b,c} would match "1 stuff tx a" or "20 stuff t c". It is a sort of regex but with a different syntax. Parentheses indicate an optional value. {1:20} is an interval; I have to check that the token is a number between 1 and 20. {a,b,c} is just an enumeration; the token can be either a, b or c. Right now I have implemented this with a regex, and the interval part was a pain to do. On my own time I tried implementing some kind of matcher by hand, but it turns out it's not that easy to do. By experimenting I ended up with a function that generates a state table from the pattern, plus a state machine. It worked well until I tried to implement the optional value, and I got stuck on how to generate the state table. After that I searched for how I could do this, which led me to things like LL parsers, LALR parsers, recursive-descent parsers, context-free grammars, etc. I never studied any of this, so it's hard to know what is relevant here, but I think this is what I need: a grammar, a parser which generates states from the grammar and a pattern, and a state machine to see if a string matches the states. So my first question is: is this right? And my second question: what do you recommend I read or study to be able to implement this?
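
    A minimal sketch in Python of the approach described above (a tokenizer plus a hand-written matcher, no parser generator), assuming pattern elements and input tokens are separated by spaces; the function names here are only illustrative:

        import re

        def parse_pattern(pattern):
            """Turn a pattern string into a list of per-token predicate functions."""
            checks = []
            for tok in pattern.split():
                m = re.fullmatch(r"\{(\d+):(\d+)\}", tok)
                if m:  # interval, e.g. {1:20}
                    lo, hi = int(m.group(1)), int(m.group(2))
                    checks.append(lambda s, lo=lo, hi=hi: s.isdigit() and lo <= int(s) <= hi)
                    continue
                m = re.fullmatch(r"\{([^}]+)\}", tok)
                if m:  # enumeration, e.g. {a,b,c}
                    allowed = set(m.group(1).split(","))
                    checks.append(lambda s, allowed=allowed: s in allowed)
                    continue
                m = re.fullmatch(r"([^()]*)\(([^()]*)\)([^()]*)", tok)
                if m:  # optional part in parentheses, e.g. t(x) matches "t" or "tx"
                    pre, opt, post = m.groups()
                    checks.append(lambda s, a=pre + post, b=pre + opt + post: s in (a, b))
                    continue
                checks.append(lambda s, lit=tok: s == lit)  # plain literal
            return checks

        def matches(pattern, text):
            checks = parse_pattern(pattern)
            tokens = text.split()
            return len(tokens) == len(checks) and all(c(t) for c, t in zip(checks, tokens))

        print(matches("{1:20} stuff t(x) {a,b,c}", "1 stuff tx a"))   # True
        print(matches("{1:20} stuff t(x) {a,b,c}", "20 stuff t c"))   # True
        print(matches("{1:20} stuff t(x) {a,b,c}", "42 stuff t c"))   # False

    For richer patterns (nesting, repetition), the recursive-descent route mentioned in the question is the natural next step.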

    Read the article

  • Using Subdomains for Newly Regional Company

    - by Taylord22
    The company I work for is expanding its business to new territories. I've got a lot of stabilization to do in the region/state where we're one of the most well-known companies of our kind. We currently have 3 distinct product lines, distinguished by 3 separate URLs. This is affecting the user flow of our site, so we'd like to clean it up before launching our products into the various regions. The business has decided to grow into 5 new states (one state consisting of one county only), none of which will feature all 3 products. Our home-base state is the only one that will have all 3 products this year. My initial thought was to use subdomains to separate out the regions; that way we could use a canonical tag to stabilize the root domain (which would feature home-state content and support content for all regions) and avoid potential duplicate-content penalties. Our product content will be nearly identical across the regions for the first year. I second-guessed myself by thinking that it was perhaps better to use a "[product].root/region" URL instead. And I'm currently stuck wondering whether it would be better to build out subdomains for both products and regions, using one modifier or the other as a funnel/branding page into the other. For instance, a user lands on "region.root.com" and sees exactly what products we offer in that region; basically, a tailored landing page. Meanwhile the bulk of the product content would actually live under "product.root.com/region/page". My head is spinning. And while searching for similar questions I also bumped into a reference to another tag meant to be used in cases similar to mine. I feel like there are a lot of risks involved in this subdomain strategy, but I also can't help but see the benefits in the user flow.

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties, which should be EntityRef properties, are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a PK to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many-to-many relation. EF actually handles the M->M relation directly without the intermediate link entity, unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId as a composite primary key in the database, which shows up in LINQ to SQL like this: This also works in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach, as it also guarantees uniqueness of the keys and so helps with database integrity. It took me a while to figure out WTF was going on here – lulled by the designer into thinking that the properties should be there when they were not. It’s actually a well-documented feature of L2S that each entity in the model requires a PK, but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issues with L2S of course – you have to play by its rules, and once you hit one of those rules there’s no way around them – you’re stuck with what it requires, which in this case meant changing the database. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ADO.NET, LINQ

    Read the article

  • Incremental Statistics Maintenance – what statistics will be gathered after DML occurs on the table?

    - by Maria Colgan
    Incremental statistics maintenance was introduced in Oracle Database 11g to improve the performance of gathering statistics on large partitioned tables. When incremental statistics maintenance is enabled for a partitioned table, Oracle accurately generates global-level statistics by aggregating partition-level statistics. As more people begin to adopt this functionality we have gotten more questions about how incremental statistics are expected to behave in a given scenario. For example, last week we got a question about which partitions should have statistics gathered on them after DML has occurred on the table. The person who asked the question assumed that statistics would only be gathered on partitions that had stale statistics (10% of the rows in the partition had changed). However, what they actually saw when they did a DBMS_STATS.GATHER_TABLE_STATS was that all of the partitions that had been affected by the DML had statistics re-gathered on them. This is the expected behavior: incremental statistics maintenance is supposed to yield the same statistics as gathering table statistics from scratch, just faster. This means incremental statistics maintenance needs to gather statistics on any partition that will change the global or table-level statistics. For instance, the min or max value for a column could change after just one row is inserted or updated in the table. It might be easier to demonstrate this using an example. Let’s take the ORDERS2 table, which is partitioned by month on order_date. We will begin by enabling incremental statistics for the table and gathering statistics on the table. After the statistics gather, the last_analyzed date for the table and all of the partitions now shows 13-Mar-12. And we now have the following column statistics for the ORDERS2 table. We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table. So, now that we have established a good baseline, let’s move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let’s assume the following transactions take place on the ORDERS2 table this month. After these transactions have occurred we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value for the STATUS column have also changed. Now if we look at the last_analyzed date for the table and the partitions, we will see that the global statistics and the statistics on the partitions where rows have changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012) have been updated. The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update. So, incremental statistics maintenance will gather statistics on any partition whose data has changed in a way that will impact the global-level statistics.
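
    A rough Python illustration of the reasoning above: think of each partition as keeping a small synopsis (min, max, set of distinct values), with the global statistics derived by aggregating those synopses. Any change that alters a partition's synopsis can alter the global statistics, which is why that partition must be re-gathered. This is only a toy model, not Oracle's actual implementation; the names are made up:

        # Toy synopses for two partitions of an ORDERS-like table.
        partitions = {
            "ORDERS_JAN_2012": {"min": 1, "max": 980, "status_values": {"SHIPPED", "DELIVERED"}},
            "ORDERS_FEB_2012": {"min": 981, "max": 2100, "status_values": {"SHIPPED"}},
        }

        def global_stats(parts):
            # Global statistics are aggregated from the partition-level synopses.
            return {
                "min_order_id": min(p["min"] for p in parts.values()),
                "max_order_id": max(p["max"] for p in parts.values()),
                "status_ndv": len(set().union(*(p["status_values"] for p in parts.values()))),
            }

        print(global_stats(partitions))  # {'min_order_id': 1, 'max_order_id': 2100, 'status_ndv': 2}

        # A single updated row introduces a new STATUS value in February: the partition
        # synopsis changes, so its statistics must be re-gathered, and the global NDV
        # for STATUS changes as well.
        partitions["ORDERS_FEB_2012"]["status_values"].add("CANCELLED")
        print(global_stats(partitions))  # {'min_order_id': 1, 'max_order_id': 2100, 'status_ndv': 3}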

    Read the article

  • MDM 2010 Summit in San Francisco

    - by Tony Ouk
    Since 2006, the MDM Global Summit Series has brought master data expertise to more than 5,000 delegates worldwide. The Series is designed to reinforce the importance of data governance as a key factor to your MDM program's success while providing real-world experience and all-in-one access to solutions providers. Come join us June 2-3, 2010 at the Hyatt Regency in San Francisco.  For more information including registration details, visit the MDM Global Summit Series website.

    Read the article

  • Running Solaris 11 as a control domain on a T2000

    - by jsavit
    There is increased adoption of Oracle Solaris 11, and many customers are deploying it on systems that previously ran Solaris 10. That includes older T1-processor based systems like T1000 and T2000. Even though they are old (from 2005) and don't have the performance of current SPARC servers, they are still functional, stable servers that customers continue to operate. One reason to install Solaris 11 on them is that older machines are attractive for testing OS upgrades before updating current, production systems. Normally this does not present a challenge, because Solaris 11 runs on any T-series or M-series SPARC server. One scenario adds a complication: running Solaris 11 in a control domain on a T1000 or T2000 hosting logical domains. Solaris 11 pre-installed Oracle VM Server for SPARC incompatible with T1 Unlike Solaris 10, Solaris 11 comes with Oracle VM Server for SPARC preinstalled. The ldomsmanager package contains the logical domains manager for Oracle VM Server for SPARC 2.2, which requires a SPARC T2, T2+, T3, or T4 server. It does not work with T1-processor systems, which are only supported by LDoms Manager 1.2 and earlier. The following screenshot shows what happens (bold font) if you try to use Oracle VM Server for SPARC 2.x commands in a Solaris 11 control domain. The commands were issued in a control domain on a T2000 that previously ran Solaris 10. We also display the version of the logical domains manager installed in Solaris 11: root@t2000 psrinfo -vp The physical processor has 4 virtual processors (0-3) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep T SUNW,Sun-Fire-T200 # ldm -V Failed to connect to logical domain manager: Connection refused # pkg info ldomsmanager Name: system/ldoms/ldomsmanager Summary: Logical Domains Manager Description: LDoms Manager - Virtualization for SPARC T-Series Category: System/Virtualization State: Installed Publisher: solaris Version: 2.2.0.0 Build Release: 5.11 Branch: 0.175.0.8.0.3.0 Packaging Date: May 25, 2012 10:20:48 PM Size: 2.86 MB FMRI: pkg://solaris/system/ldoms/[email protected],5.11-0.175.0.8.0.3.0:20120525T222048Z The 2.2 version of the logical domains manager will have to be removed, and 1.2 installed, in order to use this as a control domain. Preparing to change - create a new boot environment Before doing anything else, lets create a new boot environment: # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris NR / 2.14G static 2012-09-25 10:32 # beadm create solaris-1 # beadm activate solaris-1 # beadm list BE Active Mountpoint Space Policy Created -- ------ ---------- ----- ------ ------- solaris N / 4.82M static 2012-09-25 10:32 solaris-1 R - 2.14G static 2012-09-29 11:40 # init 0 Normally an init 6 to reboot would have been sufficient, but in the next step I reset the system anyway in order to put the system in factory default mode for a "clean" domain configuration. Preparing to change - reset to factory default There was a leftover domain configuration on the T2000, so I reset it to the factory install state. Since the ldm command is't working yet, it can't be done from the control domain, so I did it by logging onto to the service processor: $ ssh -X admin@t2000-sc Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved. 
Oracle Advanced Lights Out Manager CMT v1.7.9 Please login: admin Please Enter password: ******** sc> showhost Sun-Fire-T2000 System Firmware 6.7.10 2010/07/14 16:35 Host flash versions: OBP 4.30.4.b 2010/07/09 13:48 Hypervisor 1.7.3.c 2010/07/09 15:14 POST 4.30.4.b 2010/07/09 14:24 sc> bootmode config="factory-default" sc> poweroff Are you sure you want to power off the system [y/n]? y SC Alert: SC Request to Power Off Host. SC Alert: Host system has shut down. sc> poweron SC Alert: Host System has Reset At this point I rebooted into the new Solaris 11 boot environment, and Solaris commands showed it was running on the factory default configuration of a single domain owning all 32 CPUs and 32GB of RAM (that's what it looked like in 2005.) # psrinfo -vp The physical processor has 8 cores and 32 virtual processors (0-31) The core has 4 virtual processors (0-3) The core has 4 virtual processors (4-7) The core has 4 virtual processors (8-11) The core has 4 virtual processors (12-15) The core has 4 virtual processors (16-19) The core has 4 virtual processors (20-23) The core has 4 virtual processors (24-27) The core has 4 virtual processors (28-31) UltraSPARC-T1 (chipid 0, clock 1200 MHz) # prtconf|grep Mem Memory size: 32640 Megabytes Note that the older processor has 4 virtual CPUs per core, while current processors have 8 per core. Remove ldomsmanager 2.2 and install the 1.2 version The Solaris 11 pkg command is now used to remove the 2.2 version that shipped with Solaris 11: # pkg uninstall ldomsmanager Packages to remove: 1 Create boot environment: No Create backup boot environment: No Services to change: 2 PHASE ACTIONS Removal Phase 130/130 PHASE ITEMS Package State Update Phase 1/1 Package Cache Update Phase 1/1 Image State Update Phase 2/2 Finally, LDoms 1.2 installed via its install script, the same way it was done years ago: # unzip LDoms-1_2-Integration-10.zip # cd LDoms-1_2-Integration-10/Install/ # ./install-ldm Welcome to the LDoms installer. You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. ... ... normal install messages omitted ... The Solaris Security Toolkit applies to Solaris 10, and cannot be used in Solaris 11 (in which several things hardened by the Toolkit are already hardened by default), so answer b in the choice below: You are about to install the Logical Domains Manager package that will enable you to create, destroy and control other domains on your system. Given the capabilities of the LDoms domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit. Select a security profile from this list: a) Hardened Solaris configuration for LDoms (recommended) b) Standard Solaris configuration c) Your custom-defined Solaris security configuration profile Enter a, b, or c [a]: b ... other install messages omitted for brevity... After install I ensure that the necessary services are enabled, and verify the version of the installed LDoms Manager: # svcs ldmd STATE STIME FMRI online 22:00:36 svc:/ldoms/ldmd:default # svcs vntsd STATE STIME FMRI disabled Aug_19 svc:/ldoms/vntsd:default # ldm -V Logical Domain Manager (v 1.2-debug) Hypervisor control protocol v 1.3 Using Hypervisor MD v 1.1 System PROM: Hypervisor v. 1.7.3. 
@(#)Hypervisor 1.7.3.c 2010/07/09 15:14\015 OpenBoot v. 4.30.4. @(#)OBP 4.30.4.b 2010/07/09 13:48 Set up control domain and domain services At this point we have a functioning LDoms 1.2 environment that can be configured in the usual fashion. One difference is that LDoms 1.2 behavior had 'delayed configuration mode (as expected) during initial configuration before rebooting the control domain. Another minor difference with a Solaris 11 control domain is that you define virtual switches using the 'vanity name' of the network interface, rather than the hardware driver name as in Solaris 10. # ldm list ------------------------------------------------------------------------------ Notice: the LDom Manager is running in configuration mode. Configuration and resource information is displayed for the configuration under construction; not the current active configuration. The configuration being constructed will only take effect after it is downloaded to the system controller and the host is reset. ------------------------------------------------------------------------------ NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-c-- SP 32 32640M 3.2% 4d 2h 50m # ldm add-vdiskserver primary-vds0 primary # ldm add-vconscon port-range=5000-5100 primary-vcc0 primary # ldm add-vswitch net-dev=net0 primary-vsw0 primary # ldm set-mau 2 primary # ldm set-vcpu 8 primary # ldm set-memory 4g primary # ldm add-config initial # ldm list-spconfig factory-default initial [current] That's it, really. After reboot, we are ready to install guest domains. Summary - new wine in old bottles This example shows that (new) Solaris 11 can be installed on (old) T2000 servers and used as a control domain. The main activity is to remove the preinstalled Oracle VM Server for 2.2 and install Logical Domains 1.2 - the last version of LDoms to support T1-processor systems. I tested Solaris 10 and Solaris 11 guest domains running on this server and they worked without any surprises. This is a viable way to get further into Solaris 11 adoption, even on older T-series equipment.

    Read the article

  • Mozilla Thunderbird

    - by sadik khan
    I am a frequent user of Ubuntu and recently upgraded from Lucid to Ubuntu 11.10. I was not able to configure Thunderbird properly, so I switched to Evolution. First of all, what I want is a smooth way to configure Thunderbird with all features enabled, like the global address list and calendar settings. I also want to know how to remove Thunderbird from the global appmenu email icon, and how to put the Evolution email icon in its place. Thanks, Sadik khan

    Read the article

  • Why a graduate program in South Africa?

    - by anca.rosu
    South Africa, like many other countries, is desperate for skills. Good, solid, technical skills – together with a get-up-and-go attitude – and the desire to work for a world-class organization that is leading the way! In addition, we have made a commitment in South Africa that we need to transform our organization and develop and empower Black individuals who historically have not had the opportunity to participate in the global economy. It is through this investment in our country's people that we contribute to the development of a nation capable of competing on the global stage. This makes for an exciting recipe! We have: Plenty of young and talented individuals who are eager to get stuck in and learn. Formal, recognized qualifications that form the basis for further development. A huge big global organization – Oracle – that is committed to developing these graduates and giving them an opportunity that is out of this world! Mix the above ‘ingredients’ together Tackle and remove potential “lumps & bumps” along the way as we learn and grow together Nurture and care for each other in a warm but tough environment What have we achieved? In most cases, the outcome is an awesome bunch of new talent that is well equipped to face the IT world. Where we have the opportunity and suitable headcount available to employ these graduates at Oracle we snap them up – alternatively our business partners and customers are always eager to recruit Oracle graduates into their organizations! These individuals go through real-life work place experience whilst at Oracle. In some cases they get to travel internationally. The excitement and buzz gets into their system and their blood becomes truly RED! Oracle RED! This is valuable talent and expertise to have in our eco-system and it’s an exciting program to be a part of not only as a graduate but as an Oracle employee too!   If you have any questions related to this article feel free to contact  [email protected].  You can find our job opportunities via http://campus.oracle.com. Technorati Tags: South Africa,technical skills,graduate program,opportunity,global organization,new talent

    Read the article

  • Wifi hotspot disconnected after some time

    - by Rohit Bansal
    I am trying to use my Ubuntu system as Wifi Hotspot, but for some reason Hotspot get disconnected on its own. Searching for the solution, I found this help : Why is my ethernet connection connecting and disconnecting repeatedly? Reading through the above article I used the following command sudo killall dnsmasq as a result I manage to establish hotspot for around 5-10 sec before getting disconnected as against immediately.... Here's the system log (in case needed) tail -f /var/log/syslog : Apr 1 23:31:42 NetworkManager[901]: <info> Starting dnsmasq... Apr 1 23:31:42 NetworkManager[901]: <info> (wlan0): device state change: ip-config -> activated (reason 'none') [70 100 0] Apr 1 23:31:42 dnsmasq[4159]: started, version 2.57 cachesize 150 Apr 1 23:31:42 dnsmasq[4159]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN Apr 1 23:31:42 dnsmasq-dhcp[4159]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h Apr 1 23:31:42 dnsmasq[4159]: reading /etc/resolv.conf Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.6.104#53 Apr 1 23:31:42 dnsmasq[4159]: using nameserver 220.226.100.40#53 Apr 1 23:31:42 dnsmasq[4159]: cleared cache Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) successful, device activated. Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete. Apr 1 23:31:42 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP4 Configure Get) complete. Apr 1 23:31:42 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Apr 1 23:31:42 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Connection established at this point....now disconnecting after 10 sec... Apr 1 23:31:52 ntpdate[4194]: adjust time server 91.189.94.4 offset -0.011589 sec Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): IP6 addrconf timed out or failed. Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) scheduled... Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) started... Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) started... 
Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 53 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 53 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol tcp --destination-port 67 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert INPUT --in-interface wlan0 --protocol udp --destination-port 67 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --jump REJECT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --out-interface wlan0 --jump REJECT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --in-interface wlan0 --out-interface wlan0 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --source 10.42.43.0/255.255.255.0 --in-interface wlan0 --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table filter --insert FORWARD --destination 10.42.43.0/255.255.255.0 --out-interface wlan0 --match state --state ESTABLISHED,RELATED --jump ACCEPT Apr 1 23:32:01 NetworkManager[901]: <info> Executing: /sbin/iptables --table nat --insert POSTROUTING --source 10.42.43.0/255.255.255.0 ! --destination 10.42.43.0/255.255.255.0 --jump MASQUERADE Apr 1 23:32:01 NetworkManager[901]: <info> Starting dnsmasq... 
Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 5 of 5 (IP Configure Commit) complete. Apr 1 23:32:01 NetworkManager[901]: <info> Activation (wlan0) Stage 4 of 5 (IP6 Configure Timeout) complete. Apr 1 23:32:01 NetworkManager[901]: <warn> dnsmasq died with signal 9 Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: activated -> failed (reason 'sharing-start-failed') [100 120 18] Apr 1 23:32:01 dnsmasq[4235]: started, version 2.57 cachesize 150 Apr 1 23:32:01 dnsmasq[4235]: compile time options: IPv6 GNU-getopt DBus I18N DHCP TFTP IDN Apr 1 23:32:01 dnsmasq-dhcp[4235]: DHCP, IP range 10.42.43.10 -- 10.42.43.100, lease time 1h Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed for access point (Reppify Ubuntu) Apr 1 23:32:01 dnsmasq[4235]: reading /etc/resolv.conf Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.6.104#53 Apr 1 23:32:01 dnsmasq[4235]: using nameserver 220.226.100.40#53 Apr 1 23:32:01 dnsmasq[4235]: cleared cache Apr 1 23:32:01 NetworkManager[901]: <warn> Activation (wlan0) failed. Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0] Apr 1 23:32:01 NetworkManager[901]: <info> (wlan0): deactivating device (reason 'none') [0] Apr 1 23:32:01 dbus[885]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Apr 1 23:32:01 dbus[885]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Apr 1 23:32:01 NetworkManager[901]: <error> [1333303321.565351] [nm-device-wifi.c:1815] nm_device_wifi_set_mode(): (wlan0): error setting mode 2

    Read the article

  • Giving a Zone "More Power"

    - by Brian Leonard
    In addition to the traditional virtualization benefits that Solaris zones offer, applications running in zones are also running in a more secure environment. One way to quantify this is to compare the privileges available to the global zone with those of a local zone. For example, there are 82 distinct privileges available to the global zone: bleonard@solaris:~$ ppriv -l | wc -l 82 You can view the descriptions for each of those privileges as follows: bleonard@solaris:~$ ppriv -lv contract_event Allows a process to request critical events without limitation. Allows a process to request reliable delivery of all events on any event queue. contract_identity Allows a process to set the service FMRI value of a process contract template. ... Or for just one or more privileges: bleonard@solaris:~$ ppriv -lv file_dac_read file_dac_write file_dac_read Allows a process to read a file or directory whose permission bits or ACL do not allow the process read permission. file_dac_write Allows a process to write a file or directory whose permission bits or ACL do not allow the process write permission. In order to write files owned by uid 0 in the absence of an effective uid of 0 ALL privileges are required. However, in a non-global zone, only 43 of the 82 privileges are available by default: root@myzone:~# ppriv -l zone | wc -l 43 The missing privileges are: cpc_cpu dtrace_kernel dtrace_proc dtrace_user file_downgrade_sl file_flag_set file_upgrade_sl graphics_access graphics_map net_mac_implicit proc_clock_highres proc_priocntl proc_zone sys_config sys_devices sys_ipc_config sys_linkdir sys_dl_config sys_net_config sys_res_bind sys_res_config sys_smb sys_suser_compat sys_time sys_trans_label virt_manage win_colormap win_config win_dac_read win_dac_write win_devices win_dga win_downgrade_sl win_fontpath win_mac_read win_mac_write win_selection win_upgrade_sl xvm_control However, just like Tim Taylor, it is possible to give your zones more power. For example, a zone by default doesn't have the privileges to support DTrace: root@myzone:~# dtrace -l ID PROVIDER MODULE FUNCTION NAME The DTrace privileges can be added, however, as follows: bleonard@solaris:~$ sudo zonecfg -z myzone Password: zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user" zonecfg:myzone> verify zonecfg:myzone> exit bleonard@solaris:~$ sudo zoneadm -z myzone reboot Now I can run DTrace from within the zone: root@myzone:~# dtrace -l | more ID PROVIDER MODULE FUNCTION NAME 1 dtrace BEGIN 2 dtrace END 3 dtrace ERROR 7115 syscall nosys entry 7116 syscall nosys return ... Note, certain privileges are never allowed to be assigned to a zone. You'll be notified on boot if you attempt to assign a prohibited privilege to a zone: bleonard@solaris:~$ sudo zoneadm -z myzone reboot privilege "dtrace_kernel" is not permitted within the zone's privilege set zoneadm: zone myzone failed to verify Here's a nice listing of all the privileges and their zone status (default, optional, prohibited): Privileges in a Non-Global Zone.

    Read the article

  • Probation is Over: PASS Board Year 1, Q2

    - by Denise McInerney
    Though it's not always official every job begins with a probation period. You start out with lots of questions and every day you find out how much more you have to learn. Usually after a few months you discover that you can actually answer some questions and have at least an idea of what you are supposed to be doing. Now at the end of my second quarter on the "job" of serving on the PASS Board I have reached that point. My probation period is over. The last three months were busy for the entire Board with the budget process, an in-person meeting and moving forward with PASS Global Growth plans. I had also set a specific goal for myself for my 2nd quarter: to see the Board to adopt a Code of Conduct for the PASS Summit. Code of Conduct When I ran for the Board I included my desire to see PASS establish a code of conduct in my campaign platform.  I was motivated to do this for a few reasons. Other technical conferences have had incidents of harassment. Most of these did not have a policy in place prior to having a problem, though several conference organizers have since adopted anti-harassment policies or codes of conduct. I felt it would be in PASS' interest to establish a policy so we would be prepared should there be an incident.   "This is Community" Adopting a code of conduct would reinforce our community orientation and send a message about the positive character of the Summit. PASS is a leader among technical organizations for its promotion and support of women. Adopting a code of conduct would further demonstrate our leadership in this area. After researching similar polices from other organizations I published a first draft in April. I solicited feedback from the Board, HQ staff and some PASS members. Incorporating that feedback I presented version 4 at the May Board meeting, where we had a good discussion. You can read the meeting minutes for details. I incorporated points from  the Board discussion as well as feedback from a legal review to produce a final version which has been submitted to the Board. It will be discussed at the Board meeting July 12. You can read the full text at the end of this post. Virtual Chapters In the first quarter we started ramping up marketing support for the Virtual Chapters. Since then each edition of the Connector has highlighted a different VC to help get out the message about the variety of eductional opporutnities that are offered. These VC profiles will continue in the coming months. I was very pleased to welcome the new DBA Fundamentals VC which is geared toward new DBAs, people who are considering entering the field and those transitioning from a different IT role. Thanks to the contributions of Erin Stellato, Michelle Nalliah and Karla Landrum we published a "Virtual Chapter Guidebook". This document includes great advice on how to build and promote a VC. It's also a reference for how things work, from budgets to webinar hosting. I think this document will be extremely valuable to all our VC leaders and am grateful to those who put it together. Board Meeting/SQL Rally The Board met in May in Dallas. Among the items discussed were Global Growth, the budget, future events and the upcoming elections. We covered a lot of ground in two days and I will again refer you to the meeting minutes for details. The meeting schedule allowed us to participate in the SQL Rally networking events and one full day of the conference. I enjoyed having the opportunity to meet and talk with many PASS members. 
And my hat is off to the SQL Rally organizers who put on an outstanding event. Global Growth PASS has undertaken a major intitiative to reach and engage SQL Server professionals around the world. This Global Growth plan is ambitious and will have a significant impact on the strategic direction of the organization. We have been reaching out to the community for feedback, including hosting Twitter chats and live Town Hall meetings. I co-hosted two of these events and appreciated hearing the different perspectives of the people who participated If you have not done so I encourage you to read about the Global Growth vision and proposed governance changes  and submit your feedback. FY13 Budget July 1 is the beginning of PASS' fiscal year, which makes the end of June the deadline for approving a budget. Each director submits a budget for his or her portfolio. For the Virtual Chapter portfolio I focused on how we can allocate resources to grow the VCs. Budgeting is a give-and-take process, and while I didn't get everything I asked for I'm pleased the FY13 budget includes a significant increase in financial support for the Virtual Chapters. Many people put a lot of work into the budget, but no two people deserve credit more than VP of Finance Douglas McDowell and Accounting Manager Sandy Cherry. Thanks to both of them for getting us across the goal line on time. SQL Saturday I attended SQL Saturdays in Orange Co. CA and Phoenix. It's always inspiring to see the enthusiasm in the community for learning and networking. These events are successful due to the hard work of many volunteers. Thanks to the organizers in both cities for all your efforts. Next Up This quarter we'll be gearing up plans for the VCs at the Summit and exploring ways the VCs can best support PASS' Global Growth work. I'll also be wrapping up work on the Code of Conduct and attending a Board meeting in September. And I will be at SQL Saturday #144 in Sacramento later this month. Here is the language of the Code of Conduct I have submitted to the Board for consideration: PASS Code of Conduct The PASS Summit provides database professionals from a variety of backgrounds with an opportunity to connect, share and learn.  We value the strong sense of community that characterizes this event and we seek to foster an inclusive, professional atmosphere. We are dedicated to providing a harassment-free conference experience for everyone, regardless of gender, race, sexual orientation, disability, physical appearance, religion or any other protected classification.  Everyone at the Summit is expected to follow the Code of Conduct. This includes but is not limited to: PASS Staff, Exhibitors, Speakers, Attendees and anyone affiliated with the event. Participants are expected to follow the Code of Conduct at all Summit events, including PASS-sponsored social events. Participant behavior Harassment includes, but is not limited to, offensive verbal comments related to gender, race, sexual orientation, disability, physical appearance, religion, or any other protected classification.  Intimidation, threats, stalking, harassing photography or recording, sustained disruption of talks or other events, inappropriate physical contact and unwelcome attention will also be considered harassment. Similarly, sexual, racist, derogatory, threatening or other inappropriate language and imagery are not appropriate for any conference venue, including sessions.  
Recourse If a participant engages in any conduct that is prohibited under this Code of Conduct, the conference organizers may take any action they deem appropriate, including warning the offender or expelling the offender from the conference. No refunds will be granted to attendees expelled from the Summit due to violations of the Code of Conduct. If you are being harassed, witness harassment, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by their “Headquarters/Staff” shirts and are trained to handle the situation appropriately. A Code of Conduct Committee (CCC) made up of the Executive Manager and three members of the Board of Directors designated by the President will be authorized to take action in response to an incident or behavior that violates the Code of Conduct.

    Read the article

  • Information Driven Value Chains: Achieving Supply Chain Excellence in the 21st Century With Oracle -

    World-class supply chains can help companies achieve top line and bottom line results in today’s complex, global world. Tune into this conversation with Rick Jewell, SVP, Oracle Supply Chain Development, to hear about Oracle’s vision for world class SCM, and the latest and greatest on Oracle Supply Chain Management solutions. You will learn about Oracle’s complete, best-in-class, open and integrated solutions, which are helping companies drive profitability, achieve operational excellence, streamline innovation, and manage risk and compliance in today’s complex, global world.

    Read the article

  • Firefox Slowdown and 100% CPU after gnome-settings-daemon update

    - by digitaljail
    I'm on Ubuntu 12.04 (Unity) with Firefox 17.0.1 installed. After the latest update of gnome-settings-daemon (3.4.2-0ubuntu0.5, 3.4.2-0ubuntu0.6), Firefox started taking 100% of my CPU, periodically. I have tried various things: 1) Disabled all the non-standard extensions = no change to the CPU usage. 2) Disabled the Flash plugins (also updated at the same time) = no change to CPU usage. 3) Disabled the "Global Menu integration" 3.6.4 extension = HOOOA, CPU OK!!! Any suggestions for getting global menu integration back with no more CPU problems?

    Read the article

  • JBox2D Polygon Collisions Acting Strange

    - by andy
    I have been playing around with JBox2D and Slick2D and made a little demo with a ground object, a box object, and two different polygons. The problem I am facing is that the collision-detection for the polygons seems to be off (see picture below), but the box's collision works fine. My Code: Main Class package main; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.World; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.SlickException; import org.newdawn.slick.state.BasicGameState; import org.newdawn.slick.state.StateBasedGame; import shapes.Box; import shapes.Polygon; public class State1 extends BasicGameState{ World world; int velocityIterations; int positionIterations; float pixelsPerMeter; int state; Box ground; Box box1; Polygon poly1; Polygon poly2; Renderer renderer; public State1(int state) { this.state = state; } @Override public void init(GameContainer gc, StateBasedGame game) throws SlickException { velocityIterations = 10; positionIterations = 10; pixelsPerMeter = 1f; world = new World(new Vec2(0.f, -9.8f)); renderer = new Renderer(gc, gc.getGraphics(), pixelsPerMeter, world); box1 = new Box(-100f, 200f, 40, 50, BodyType.DYNAMIC, world); ground = new Box(-14, -275, 50, 900, BodyType.STATIC, world); poly1 = new Polygon(50f, 10f, new Vec2[] { new Vec2(-6f, -14f), new Vec2(0f, -20f), new Vec2(6f, -14f), new Vec2(10f, 10f), new Vec2(-10f, 10f) }, BodyType.DYNAMIC, world); poly2 = new Polygon(0f, 10f, new Vec2[] { new Vec2(10f, 0f), new Vec2(20f, 0f), new Vec2(30f, 10f), new Vec2(30f, 20f), new Vec2(20f, 30f), new Vec2(10f, 30f), new Vec2(0f, 20f), new Vec2(0f, 10f) }, BodyType.DYNAMIC, world); } @Override public void update(GameContainer gc, StateBasedGame game, int delta) throws SlickException { world.step((float)delta / 180f, velocityIterations, positionIterations); } @Override public void render(GameContainer gc, StateBasedGame game, Graphics g) throws SlickException { renderer.render(); } @Override public int getID() { return this.state; } } Polygon Class package shapes; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.BodyDef; import org.jbox2d.dynamics.BodyType; import org.jbox2d.dynamics.FixtureDef; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; public class Polygon { public float x, y; public Color color; public BodyType bodyType; org.newdawn.slick.geom.Polygon poly; BodyDef def; PolygonShape ps; FixtureDef fd; Body body; World world; Vec2[] verts; public Polygon(float x, float y, Vec2[] verts, BodyType bodyType, World world) { this.verts = verts; this.x = x; this.y = y; this.bodyType = bodyType; this.world = world; init(); } public void init() { def = new BodyDef(); def.type = bodyType; def.position.set(x, y); ps = new PolygonShape(); ps.set(verts, verts.length); fd = new FixtureDef(); fd.shape = ps; fd.density = 2.0f; fd.friction = 0.7f; fd.restitution = 0.5f; body = world.createBody(def); body.createFixture(fd); } } Rendering Class package main; import org.jbox2d.collision.shapes.PolygonShape; import org.jbox2d.collision.shapes.ShapeType; import org.jbox2d.common.MathUtils; import org.jbox2d.common.Vec2; import org.jbox2d.dynamics.Body; import org.jbox2d.dynamics.Fixture; import org.jbox2d.dynamics.World; import org.newdawn.slick.Color; import org.newdawn.slick.GameContainer; import org.newdawn.slick.Graphics; import org.newdawn.slick.geom.Polygon; import 
org.newdawn.slick.geom.Transform; public class Renderer { World world; float pixelsPerMeter; GameContainer gc; Graphics g; public Renderer(GameContainer gc, Graphics g, float ppm, World world) { this.world = world; this.pixelsPerMeter = ppm; this.g = g; this.gc = gc; } public void render() { Body current = world.getBodyList(); Vec2 center = current.getLocalCenter(); while(current != null) { Vec2 pos = current.getPosition(); g.pushTransform(); g.translate(pos.x * pixelsPerMeter + (0.5f * gc.getWidth()), -pos.y * pixelsPerMeter + (0.5f * gc.getHeight())); Fixture f = current.getFixtureList(); while(f != null) { ShapeType type = f.getType(); g.setColor(getColor(current)); switch(type) { case POLYGON: { PolygonShape shape = (PolygonShape)f.getShape(); Vec2[] verts = shape.getVertices(); int count = shape.getVertexCount(); Polygon p = new Polygon(); for(int i = 0; i < count; i++) { p.addPoint(verts[i].x, verts[i].y); } p.setCenterX(center.x); p.setCenterY(center.y); p = (Polygon)p.transform(Transform.createRotateTransform(current.getAngle() + MathUtils.PI, center.x, center.y)); p = (Polygon)p.transform(Transform.createScaleTransform(pixelsPerMeter, pixelsPerMeter)); g.draw(p); break; } case CIRCLE: { f.getShape(); } default: } f = f.getNext(); } g.popTransform(); current = current.getNext(); } } public Color getColor(Body b) { Color c = new Color(1f, 1f, 1f); switch(b.m_type) { case DYNAMIC: if(b.isActive()) { c = new Color(255, 123, 0); } else { c = new Color(99, 99, 99); } break; case KINEMATIC: break; case STATIC: c = new Color(111, 111, 111); break; default: break; } return c; } } Any help with fixing the collisions would be greatly appreciated, and if you need any other code snippets I would be happy to provide them.

    Read the article

  • Register Now to the New Oracle Argus Safety 7 Implementation Boot Camp in Miami, Florida - Nov 12-15, 2013!

    - by Roxana Babiciu
    Oracle's Argus Safety 7 boot camp is an instructor-led training course which provides a good understanding of how the Oracle Argus Safety Standard Edition and Oracle Argus Safety Japan products address complex pharmacovigilance requirements and help ensure global regulatory compliance by enabling sound safety decisions. Oracle Argus Safety's advanced database helps ensure global regulatory compliance, in turn enabling sound safety decisions. Register now for this boot camp, a 4-day (in-class) instructor-led event taught using a combination of lectures and hands-on exercises.

    Read the article

  • Register Now to the New Oracle Argus Safety 7 Implementation Boot Camp - Tokyo, Japan - Dec 10-13, 2013!

    - by Roxana Babiciu
    Oracle's Argus Safety 7 boot camp is an instructor-led training course which provides a good understanding of how the Oracle Argus Safety Standard Edition and Oracle Argus Safety Japan products address complex pharmacovigilance requirements and help ensure global regulatory compliance by enabling sound safety decisions. Oracle Argus Safety's advanced database helps ensure global regulatory compliance, in turn enabling sound safety decisions. Read more here.

    Read the article

  • Shakespeare and storing Unicode characters

    - by John Paul Cook
    This post is about the political issues involved in using multiple languages in a global organization, and how to troubleshoot the technical details. The CHAR and VARCHAR data types are NOT suitable for global data. Some people still cling to CHAR and VARCHAR, justifying their use by truthfully saying that they take up only half the space of the NCHAR and NVARCHAR data types. But you’ll never be able to store Chinese, Korean, Greek, Japanese, Arabic, or many other languages unless you use NCHAR and NVARCHAR...(read more)
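
    A quick illustration of the underlying point, sketched in Python rather than T-SQL: a single-byte code page (roughly what a CHAR/VARCHAR column with a Western collation gives you) cannot represent most of the world's scripts, while the Unicode storage behind NCHAR/NVARCHAR can, at the cost of roughly double the space for plain ASCII text:

        text = "Hamlet 哈姆雷特 Άμλετ هاملت"  # Chinese, Greek and Arabic alongside Latin

        # A single-byte encoding such as Latin-1 simply cannot hold these characters.
        try:
            text.encode("latin-1")
        except UnicodeEncodeError as err:
            print("latin-1 failed:", err)

        # Unicode encodings handle every script. UTF-16 (two bytes per character here)
        # is roughly what NCHAR/NVARCHAR columns store; UTF-8 is shown for comparison.
        print(len(text.encode("utf-16-le")), "bytes as UTF-16")
        print(len(text.encode("utf-8")), "bytes as UTF-8")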

    Read the article

  • Google Optimization is the Key to Online Success For Any Business

    Google is by far the most used and preferred search engine in the world. It is miles ahead of its biggest rival when it comes to the global audience, and for the same reason it provides one of the most attractive platforms for business owners to promote their business to a large global audience. Since most people use Google to search for anything they want, as a business owner your primary requirement is to get a good ranking on this search engine more than anywhere else.

    Read the article

  • The entity type String is not part of the model for the current context error [migrated]

    - by Michael V
    I am getting the following error in my controller after the view submits the collection: The entity type String is not part of the model for the current context. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The entity type String is not part of the model for the current context. Source Error:
    Line 51: foreach (var survey in mysurveys)
    Line 52: {
    Line 53: db.Entry(survey).State = EntityState.Modified;
    Line 54:
    Line 55: // db.Entry(survey).State = EntityState.Modified;
    Here is the code:
    [HttpPost]
    public ActionResult UpdateTest(FormCollection mysurveys)
    {
        System.Diagnostics.Debug.WriteLine("iam in test post" + mysurveys.Count);
        foreach (var survey in mysurveys)
        {
            db.Entry(survey).State = EntityState.Modified;
        }
        db.SaveChanges();
        return View(mysurveys);
    }
    Similar code with one record only (no foreach) works fine.

    Read the article

  • repair broken packages-"dpkg: error: conflicting actions -f (--field) and -r (--remove)"

    - by yinon
    Ubuntu 12.04 LTS (if more information is needed, tell me and I'll provide it). The main problem is:

    tzach@tzach-pc:~$ sudo apt-get install docky [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done docky is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). tzach@tzach-pc:~$

    and also:

    tzach@tzach-pc:~$ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f.

    So we tried the guide here, message #9: http://ubuntuforums.org/showthread.php?t=947124 We ran all of the first 4 commands, and the last one, "sudo apt-get autoremove", gave us:

    tzach@tzach-pc:~$ sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not installed or java6-runtime-headless openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not installed E: Unmet dependencies. Try using -f.

    So we ran the last command twice: sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java and sudo dpkg --remove -force --force-remove-reinstreq openjdk-7-jre-lib but both of them give:

    tzach@tzach-pc:~$ sudo dpkg --remove -force --force-remove-reinstreq ca-certificates-java [sudo] password for tzach: dpkg: error: conflicting actions -f (--field) and -r (--remove) Type dpkg --help for help about installing and deinstalling packages [*]; Use `dselect' or `aptitude' for user-friendly package management; Type dpkg -Dhelp for a list of dpkg debug flag values; Type dpkg --force-help for a list of forcing options; Type dpkg-deb --help for help about manipulating *.deb files; Options marked [*] produce a lot of output - pipe it through `less' or `more' !

    EDIT FOR green7 - output of "sudo apt-get -f install":

    tzach@tzach-pc:~$ sudo apt-get -f install [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java Suggested packages: default-jre equivs sun-java6-fonts ttf-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho ttf-telugu-fonts ttf-oriya-fonts ttf-kannada-fonts ttf-bengali-fonts The following packages will be REMOVED: ttf-mscorefonts-installer The following NEW packages will be installed: icedtea-7-jre-cacao icedtea-7-jre-jamvm java-common openjdk-7-jre-headless tzdata-java 0 upgraded, 5 newly installed, 1 to remove and 355 not upgraded. 5 not fully installed or removed. Need to get 0 B/29.6 MB of archives. After this operation, 88.5 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: warning: there's no installed package matching ttf-mscorefonts-installer:amd64 Setting up tzdata (2012e-0ubuntu0.12.04) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing tzdata (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: tzdata E: Sub-process /usr/bin/dpkg returned an error code (1)

    EDIT2 FOR green7:

    tzach@tzach-pc:~$ sudo apt-get remove --purge tzdata [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ca-certificates-java : Depends: openjdk-6-jre-headless (>= 6b16-1.6.1-2) but it is not going to be installed or java6-runtime-headless libc6 : Depends: tzdata but it is not going to be installed libc6:i386 : Depends: tzdata:i386 libical0 : Depends: tzdata but it is not going to be installed openjdk-7-jre-lib : Depends: openjdk-7-jre-headless (>= 7~b130~pre0) but it is not going to be installed python-dateutil : Depends: tzdata but it is not going to be installed ubuntu-minimal : Depends: tzdata but it is not going to be installed util-linux : Depends: tzdata (>= 2006c-2) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    EDIT3 FOR green7:

    tzach@tzach-pc:~$ sudo apt-get install openjdk-7-jre-headless [sudo] password for tzach: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: openjdk-7-jre-headless : Depends: tzdata-java but it is not going to be installed Depends: java-common (>= 0.28) but it is not going to be installed Recommends: icedtea-7-jre-cacao (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed Recommends: icedtea-7-jre-jamvm (= 7~u3-2.1.1~pre1-1ubuntu3) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Some parts of the text are also supposed to be bold, but that's not critical. Thanks for the editing, and thanks a lot for your assistance.
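    The dpkg error quoted in the title comes from the single dash on -force: dpkg reads it as a string of short options, so it picks up -f (--field) and -r (--remove), two conflicting actions, exactly as the message says. The --force-* options need a double dash. A minimal sketch of the corrected sequence, assuming the same package names shown above:

        # use the double-dash --force-* form, not "-force"
        sudo dpkg --remove --force-remove-reinstreq ca-certificates-java openjdk-7-jre-lib
        # finish any half-configured packages, then let apt resolve the rest
        sudo dpkg --configure -a
        sudo apt-get -f install

    The repeated debconf message about /var/cache/debconf/config.dat being locked by another process also suggests closing any other package manager (Update Manager, Software Centre, another apt/dpkg session) before retrying.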

    Read the article

  • Oracle VM RAC template - what it took

    - by wcoekaer
    In my previous posting I introduced the latest Oracle Real Application Clusters / Oracle VM template and mentioned how easy it is to deploy a complete Oracle RAC cluster with Oracle VM. In fact, you don't need any prior knowledge at all to get a complete production-ready setup going. Here is an example... I built a 4-node RAC cluster, completely configured, in just over 40 minutes - starting from importing the template into Oracle VM and creating the VMs, through to a fully up-and-running Oracle RAC. And what was needed? One text file with some hostnames and IP addresses, plus deploycluster.py. The setup is a 4-node cluster where each VM has 8GB of RAM and 4 vCPUs. The shared ASM storage in this case is 100GB, as 5 x 20GB volumes. The VM names are racovm.0-racovm.3. The deploycluster script starts the VMs, verifies the configuration and sends the database cluster configuration info through Oracle VM Manager to the 4 node VMs. Once the VMs are up and running, the first VM starts the actual Oracle RAC setup inside and talks to the 3 other VMs. I did not log into any VM until after everything was completed. In fact, I connected to the database remotely before logging in at all.

    # ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
    Oracle RAC OneCommand (v1.1.0) for Oracle VM - deploy cluster - (c) 2011-2012 Oracle Corporation (com: 26700:v1.1.0, lib: 126247:v1.1.0, var: 1100:v1.1.0) - v2.4.3 - wopr8.wimmekes.net (x86_64) Invoked as root at Sat Jun 2 17:31:29 2012 (size: 37500, mtime: Wed May 16 00:13:19 2012)
    Using: ./deploycluster.py -u admin -H localhost --vms racovm.0,racovm.1,racovm.2,racovm.3 --netconfig ./netconfig.ini
    INFO: Login password to Oracle VM Manager not supplied on command line or environment (DEPLOYCLUSTER_MGR_PASSWORD), prompting...
    Password:
    INFO: Attempting to connect to Oracle VM Manager...
    INFO: Oracle VM Client (3.1.1.305) protocol (1.8) CONNECTED (tcp) to Oracle VM Manager (3.1.1.336) protocol (1.8) IP (192.168.1.40) UUID (0004fb0000010000cbce8a3181569a3e)
    INFO: Inspecting /root/rac/deploycluster/netconfig.ini for number of nodes defined...
    INFO: Detected 4 nodes in: /root/rac/deploycluster/netconfig.ini
    INFO: Located a total of (4) VMs; 4 VMs with a simple name of: ['racovm.0', 'racovm.1', 'racovm.2', 'racovm.3']
    INFO: Verifying all (4) VMs are in Running state
    INFO: VM with a simple name of "racovm.0" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.1" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.2" is in a Stopped state, attempting to start it...OK.
    INFO: VM with a simple name of "racovm.3" is in a Stopped state, attempting to start it...OK.
    INFO: Detected that all (4) VMs specified on command have (5) common shared disks between them (ASM_MIN_DISKS=5)
    INFO: The (4) VMs passed basic sanity checks and in Running state, sending cluster details as follows:
    netconfig.ini (Network setup): /root/rac/deploycluster/netconfig.ini
    buildcluster: yes
    INFO: Starting to send cluster details to all (4) VM(s).......
    INFO: Sending to VM with a simple name of "racovm.0"....
    INFO: Sending to VM with a simple name of "racovm.1".....
    INFO: Sending to VM with a simple name of "racovm.2".....
    INFO: Sending to VM with a simple name of "racovm.3"......
    INFO: Cluster details sent to (4) VMs... Check log (default location /u01/racovm/buildcluster.log) on build VM (racovm.0)...
    INFO: deploycluster.py completed successfully at 17:32:02 in 33.2 seconds (00m:33s)
    Logfile at: /root/rac/deploycluster/deploycluster2.log

    My netconfig.ini:

    # Node specific information
    NODE1=db11rac1
    NODE1VIP=db11rac1-vip
    NODE1PRIV=db11rac1-priv
    NODE1IP=192.168.1.56
    NODE1VIPIP=192.168.1.65
    NODE1PRIVIP=192.168.2.2
    NODE2=db11rac2
    NODE2VIP=db11rac2-vip
    NODE2PRIV=db11rac2-priv
    NODE2IP=192.168.1.58
    NODE2VIPIP=192.168.1.66
    NODE2PRIVIP=192.168.2.3
    NODE3=db11rac3
    NODE3VIP=db11rac3-vip
    NODE3PRIV=db11rac3-priv
    NODE3IP=192.168.1.173
    NODE3VIPIP=192.168.1.174
    NODE3PRIVIP=192.168.2.4
    NODE4=db11rac4
    NODE4VIP=db11rac4-vip
    NODE4PRIV=db11rac4-priv
    NODE4IP=192.168.1.175
    NODE4VIPIP=192.168.1.176
    NODE4PRIVIP=192.168.2.5
    # Common data
    PUBADAP=eth0
    PUBMASK=255.255.255.0
    PUBGW=192.168.1.1
    PRIVADAP=eth1
    PRIVMASK=255.255.255.0
    RACCLUSTERNAME=raccluster
    DOMAINNAME=wimmekes.net
    DNSIP=
    # Device used to transfer network information to second node in interview mode
    NETCONFIG_DEV=/dev/xvdc
    # 11gR2 specific data
    SCANNAME=db11vip
    SCANIP=192.168.1.57

    The last few lines of the in-VM log file:

    2012-06-02 14:01:40:[clusterstate:Time :db11rac1] Completed successfully in 2 seconds (0h:00m:02s)
    2012-06-02 14:01:40:[buildcluster:Done :db11rac1] Build 11gR2 RAC Cluster
    2012-06-02 14:01:40:[buildcluster:Time :db11rac1] Completed successfully in 1779 seconds (0h:29m:39s)

    From start_vm to completely configured: 29m:39s. The other ~10 minutes went to importing the template and creating the 4 VMs from it, along with the shared storage configuration. The result is a complete Oracle 11gR2 RAC database with ASM, CRS and the RDBMS up and running on all 4 nodes. Simply connect and use. Production ready. Oracle on Oracle.
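    If you do want to watch the build as it runs, the logfile the tool points at can simply be followed over ssh. A small sketch, assuming the default log location shown above and that db11rac1 is the hostname assigned to racovm.0 in netconfig.ini:

        # follow the RAC build from outside the VM (hostname and path as configured above)
        ssh root@db11rac1 tail -f /u01/racovm/buildcluster.log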

    Read the article

  • What is involved for a simple UDP game?

    - by acidzombie24
    I once tried to write a simple game with UDP in a week as a throwaway test. It went horribly, and I threw it away early. The main problem I had was restoring the game state of all players/enemies/objects to an old state and then fast-forwarding the game to the point in time the player is actually playing (e.g. half a second before a jump - arriving a little early or late can make the player miss the jump). Maybe this method is not the easiest way? I suspect it is, but I designed it wrong from the beginning and only realized at the end of the second day (so I didn't learn too much or waste that much time). For myself and others: what is involved in a simple UDP game, and how do I write one? Or, how do I solve the prediction problem of restoring to a past state properly? I'll mark this as CW because I know there will be lots of helpful answers.
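    The rollback-and-replay scheme described above is usually called client-side prediction with server reconciliation, and the bookkeeping boils down to a history of inputs and predicted states keyed by tick number. A minimal sketch in Python, with all names hypothetical and the actual simulation and socket code left out:

        TICK_RATE = 60   # simulation ticks per second
        HISTORY = 120    # keep roughly two seconds of snapshots

        class Predictor:
            """Rewind to the last server-acknowledged tick, then replay
            the inputs the server has not seen yet."""

            def __init__(self, initial_state):
                self.state = initial_state
                self.history = {}  # tick -> (input, predicted_state)

            def local_tick(self, tick, player_input, simulate):
                # predict immediately so the local player feels no latency
                self.state = simulate(self.state, player_input)
                self.history[tick] = (player_input, self.state)
                self.history.pop(tick - HISTORY, None)  # drop old snapshots
                return self.state

            def on_server_state(self, server_tick, server_state, simulate):
                # authoritative state arrives late: adopt it, then fast-forward
                # through every input newer than what the server has confirmed
                self.state = server_state
                for tick in sorted(t for t in self.history if t > server_tick):
                    player_input, _ = self.history[tick]
                    self.state = simulate(self.state, player_input)
                    self.history[tick] = (player_input, self.state)
                return self.state

    The rest of "a simple UDP game" is mostly framing, sequence numbers and acknowledgements on top of sendto/recvfrom, since UDP itself guarantees neither ordering nor delivery.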

    Read the article

  • What's going on with my wireless?

    - by Mark Scott
    The WiFi on my Acer laptop (it's a 3810TZ, with Intel Corporation Centrino Wireless-N 1000) works flawlessly on Ubuntu 11.04. On 11.10, it's continually up and down, and it fills the system log with messages such as those below. What is going on? It seems to be unable to decide which regulatory domain it's in. Despite the system configuration being quite clearly set to UK, it persists in configuring itself as though it were operating in Taiwan!

    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: WPA: 4-Way Handshake failed - pre-shared key may be incorrect
    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: CTRL-EVENT-DISCONNECTED bssid=00:50:7f:72:bf:b0 reason=15
    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: CTRL-EVENT-DISCONNECTED bssid=00:00:00:00:00:00 reason=3
    Nov 22 15:34:37 MES3810 kernel: [18239.240355] cfg80211: All devices are disconnected, going to restore regulatory settings
    Nov 22 15:34:37 MES3810 kernel: [18239.240362] cfg80211: Restoring regulatory settings
    Nov 22 15:34:37 MES3810 kernel: [18239.240368] cfg80211: Calling CRDA to update world regulatory domain
    Nov 22 15:34:37 MES3810 kernel: [18239.240408] wlan0: deauthenticating from 00:50:7f:72:bf:b0 by local choice (reason=3)
    Nov 22 15:34:37 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: 4-way handshake - disconnected
    Nov 22 15:34:37 MES3810 kernel: [18239.246556] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
    Nov 22 15:34:37 MES3810 kernel: [18239.246563] cfg80211: World regulatory domain updated:
    Nov 22 15:34:37 MES3810 kernel: [18239.246567] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
    Nov 22 15:34:37 MES3810 kernel: [18239.246572] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.246577] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.246582] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.246587] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.246592] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
    Nov 22 15:34:37 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: disconnected - scanning
    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: Trying to authenticate with 00:50:7f:72:bf:b0 (SSID='PoplarHouse' freq=2412 MHz)
    Nov 22 15:34:37 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: scanning - authenticating
    Nov 22 15:34:37 MES3810 kernel: [18239.509877] wlan0: authenticate with 00:50:7f:72:bf:b0 (try 1)
    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: Trying to associate with 00:50:7f:72:bf:b0 (SSID='PoplarHouse' freq=2412 MHz)
    Nov 22 15:34:37 MES3810 kernel: [18239.512276] wlan0: authenticated
    Nov 22 15:34:37 MES3810 kernel: [18239.512615] wlan0: associate with 00:50:7f:72:bf:b0 (try 1)
    Nov 22 15:34:37 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: authenticating - associating
    Nov 22 15:34:37 MES3810 kernel: [18239.516508] wlan0: RX ReassocResp from 00:50:7f:72:bf:b0 (capab=0x431 status=0 aid=1)
    Nov 22 15:34:37 MES3810 kernel: [18239.516514] wlan0: associated
    Nov 22 15:34:37 MES3810 wpa_supplicant[1053]: Associated with 00:50:7f:72:bf:b0
    Nov 22 15:34:37 MES3810 kernel: [18239.529097] cfg80211: Calling CRDA for country: TW
    Nov 22 15:34:37 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: associating - associated
    Nov 22 15:34:37 MES3810 kernel: [18239.535680] cfg80211: Updating information on frequency 2412 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535688] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535692] cfg80211: Updating information on frequency 2417 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535697] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535702] cfg80211: Updating information on frequency 2422 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535707] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535711] cfg80211: Updating information on frequency 2427 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535716] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535720] cfg80211: Updating information on frequency 2432 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535725] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535730] cfg80211: Updating information on frequency 2437 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535735] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535739] cfg80211: Updating information on frequency 2442 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535744] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535748] cfg80211: Updating information on frequency 2447 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535753] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535757] cfg80211: Updating information on frequency 2452 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535763] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535767] cfg80211: Updating information on frequency 2457 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535772] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535777] cfg80211: Updating information on frequency 2462 MHz for a 20 MHz width channel with regulatory rule:
    Nov 22 15:34:37 MES3810 kernel: [18239.535782] cfg80211: 2402000 KHz - 2472000 KHz @ KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535786] cfg80211: Disabling freq 2467 MHz
    Nov 22 15:34:37 MES3810 kernel: [18239.535789] cfg80211: Disabling freq 2472 MHz
    Nov 22 15:34:37 MES3810 kernel: [18239.535794] cfg80211: Regulatory domain changed to country: TW
    Nov 22 15:34:37 MES3810 kernel: [18239.535797] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
    Nov 22 15:34:37 MES3810 kernel: [18239.535802] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535807] cfg80211: (5270000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
    Nov 22 15:34:37 MES3810 kernel: [18239.535812] cfg80211: (5735000 KHz - 5815000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
    Nov 22 15:34:38 MES3810 NetworkManager[875]: (wlan0): supplicant interface state: associated - 4-way handshake

    Any ideas?
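    The log points at two separate things: the 4-Way Handshake failure (worth re-checking the stored WPA passphrase for the network) and the regulatory domain flipping to TW, which here comes from the driver/AP rather than the system setting ("Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain"). A sketch of how the regulatory setting can be checked and pinned, assuming the iw and crda tools that ship with 11.10:

        # show the regulatory domain currently in effect
        iw reg get
        # ask for the UK domain for this session
        sudo iw reg set GB
        # make it the default at boot (file provided by the crda package)
        echo 'REGDOMAIN=GB' | sudo tee /etc/default/crda

    Whether the driver's built-in domain overrides this is hardware-dependent, so treat it as a diagnostic step rather than a guaranteed fix.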

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >