Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.


  • Sharepoint Foundation 2010 installation problems

    - by Robert Koritnik
    I'm having problems setting up a development machine for SharePoint (Foundation) 2010. This is what I've done so far on the machine:
    1. Installed a clean Windows 7 x64 with 4GB of RAM, not joined to any domain - just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the IIS6-related ones (two of them).
    3. Installed SQL Server 2008 R2 Developer Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010: first extracted the files, changed the config to enable installation on Windows 7, and then installed it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the "Log on as a service" right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I can use a non-domain username (the SPF_CONFIG user I created in the previous step).
    But all I get is this: The outcome after this error is:
    - Database Sharepoint2010Config is created.
    - User SPF_CONFIG is added to SQL Server and attached to this newly created database as dbowner; checking SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public.
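    As an aside, the server-level role memberships reported at the end can be confirmed with a standard catalog query against the master database (a sketch; only the login name is taken from the question):

        -- Lists fixed server roles the SPF_CONFIG login belongs to
        -- (the implicit public role is not recorded here and will not be listed).
        SELECT r.name AS role_name
        FROM sys.server_role_members AS m
        JOIN sys.server_principals AS r ON r.principal_id = m.role_principal_id
        JOIN sys.server_principals AS p ON p.principal_id = m.member_principal_id
        WHERE p.name = 'SPF_CONFIG';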

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of a provisioned database depends on the service template the Self Service user selects when requesting the database. In EM12c, the DBaaS Self Service/SSA Administrator can host various service templates, based on underlying DBCA templates, in the service catalog. Provisioned databases often require production-scale data for UAT, testing or development purposes, and managing DBCA templates with data can be unwieldy. So we need to populate the database using the post deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook.

    In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways, using: 1) Data Pump, 2) Transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even have your own method plugged in as part of the post deployment script option.

    Using Data Pump to populate databases

    These are the steps to be followed to implement extending DBaaS using the Data Pump methodology:

    1. The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

        -- Full export
        expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

       Figure 1: Full export of database using Data Pump

    2. Create a post deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

        -- Full import
        declare
            h1 NUMBER;
        begin
            -- Creating the directory object where the source database dump is backed up.
            execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
            -- Running the import.
            h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
            dbms_datapump.set_parallel(handle => h1, degree => 1);
            dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
            dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
            dbms_datapump.start_job(handle => h1);
            dbms_datapump.detach(handle => h1);
        end;
        /

       Figure 2: Importing using Data Pump PL/SQL procedures

    3. Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes.

    4. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the Customize "Create Database Deployment Procedure" flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

       Figure 3: Using the Custom Script option for calling the import SQL

    5. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.
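    One small addition, not part of the original steps: while a provisioning request is running, the state of the import kicked off by the post deployment script can be watched from a standard dictionary view (the job name matches the one passed to dbms_datapump.open above):

        SELECT job_name, operation, job_mode, state
        FROM   dba_datapump_jobs
        WHERE  job_name = 'DB_IMPORT10';
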
    Using Transportable tablespaces to populate databases

    A copy of all user/application tablespaces enables this method of populating databases. These are the required steps to extend DBaaS using transportable tablespaces:

    1. The production DBA needs to create a backup of the tablespaces. The datafiles may need conversion [such as from big endian to little endian, or vice versa] depending on the platforms of production and of the destination where DBaaS created the test database. Here is a sample backup script that shows how to find out if any conversion is required, and describes the steps required to convert the datafiles and back up the tablespaces.

    2. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s).

    3. Create a post deployment SQL script; this script can either be uploaded into the software library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post deployment SQL script using transportable tablespaces.

    4. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created when the backed-up datafiles are plugged in.

    5. The SSA Administrator should customize the "Create Database Deployment Procedure" and provide the DBCA template created in the previous step. In the "Additional Configuration Options" step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure.

    6. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post deployment step.

    More Information:
    - Database-as-a-Service on Exadata Cloud
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal
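    As an aside to step 1 above: before a tablespace set can be transported, it must be self-contained, which the backup script would typically verify with DBMS_TTS before making the tablespaces read-only. A minimal sketch (the tablespace names are hypothetical):

        -- Check that the tablespace set has no references outside itself.
        EXECUTE dbms_tts.transport_set_check('APP_DATA,APP_IDX', TRUE);
        SELECT * FROM transport_set_violations;
        -- Tablespaces must be read-only while their datafiles are copied.
        ALTER TABLESPACE app_data READ ONLY;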

  • Notepad++ Associate custom ext for use by ctags

    - by user185420
    Hi, I use .pkb and .pkh extensions for PL/SQL files (rather than .sql). I have already associated the .pkb and .pkh extensions in langs.xml (Styler Config) so that Notepad++ knows which syntax highlighting to use when I open .pkb and .pkh files (the same highlighting as for the .sql extension). Now the problem is that I cannot use plugins that parse code using ctags (like SourceCookifier, OpenCtag, GtagSearch), because these plugins do not recognize the .pkb and .pkh extensions. The only way I can use them is to rename my file to .sql and, when done, change it back to the .pkb or .pkh extension. Is there any config that could make a ctags-based plugin work with non-.sql extensions? I tried changing various configs for the plugins but did not succeed in making them work. Thanks.
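    One avenue worth trying - a sketch, and an assumption that the plugin shells out to Exuberant Ctags and honours its standard options file - is to map the extra extensions onto the SQL parser in ~/.ctags (ctags.cnf on Windows):

        --langmap=SQL:+.pkb.pkh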

  • DataSource or ConnectionPoolDataSource for Application Server JDBC resources

    - by Vinnie
    When creating JNDI JDBC connection pools in an application server, I always specified the type as javax.sql.ConnectionPoolDataSource. I never really gave it too much thought, as it always seemed natural to prefer pooled connections over non-pooled. However, in looking at some examples (specifically for Tomcat) I noticed that they specify javax.sql.DataSource. Further, it seems there are settings for maxIdle and maxWait, giving the impression that these connections are pooled as well. Glassfish also allows these parameters regardless of the type of data source selected. Are javax.sql.DataSource connections pooled in an application server (or servlet container)? What advantages (if any) are there to choosing javax.sql.ConnectionPoolDataSource over javax.sql.DataSource (or vice versa)?
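    For reference, this is the kind of Tomcat declaration the question refers to - a sketch with hypothetical names, using attribute names from Tomcat's commons-dbcp-based pool. Note the resource is typed as a plain javax.sql.DataSource, yet maxActive/maxIdle/maxWait describe a pool behind it:

        <Resource name="jdbc/MyDS" auth="Container"
                  type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost:3306/mydb"
                  username="app" password="secret"
                  maxActive="20" maxIdle="10" maxWait="10000"/>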

  • Migrate a Django project from MySQL to Oracle

    - by pablo
    Hi, I have a Django 1.1 project that works with a legacy MySQL db. I'm trying to migrate this project to Oracle (XE and 11g). We have two options for the migration:
    - Use SQL Developer to create a migration SQL script.
    - Use Django fixtures.
    The schema created with the SQL script from SQL Developer doesn't match the schema created by syncdb. For example, Django expects TIMESTAMP columns while SQL Developer creates DATE columns. Using syncdb with Django fixtures could be great, but when trying to load the MySQL fixtures into Oracle after running syncdb, I'm getting: IntegrityError: ORA-00001: unique constraint (USER.SYS_C004253) violated. How can I find which part creates the integrity error? Thanks
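    A standard first step is to resolve the system-generated constraint name back to its table and columns through the Oracle data dictionary (plain catalog queries; nothing here is project-specific):

        SELECT owner, table_name, constraint_type
        FROM   all_constraints
        WHERE  constraint_name = 'SYS_C004253';

        SELECT column_name, position
        FROM   all_cons_columns
        WHERE  constraint_name = 'SYS_C004253';

    Knowing the table and columns involved narrows down which fixture rows violate the unique constraint.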

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although not specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list.

    However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hardcode a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND/INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows the user to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.
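    To make the interval_unit problem concrete, here is a minimal sketch of the kind of rebuild the sync script generates, with the user-supplied unit plugged into the conversion function (the table and column names are hypothetical, and 'SECOND' stands in for whatever the user chooses):

        -- Rebuild the table, converting a NUMBER column to INTERVAL DAY TO SECOND
        -- by interpreting the stored numbers as counts of seconds.
        CREATE TABLE tbl1_rebuild AS
            SELECT id,
                   NUMTODSINTERVAL(duration, 'SECOND') AS duration
            FROM   tbl1;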
    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually 3 separate scenarios for this problem that have separate solutions within the engine:

    Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding:

        ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL;

    Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.

    Adding a NOT NULL column to a table, where a separate change forced a table rebuild. Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.

    Changing an existing NULL column to a NOT NULL column. To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value (see the sketch below).

    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely.

    Note that the same (long-running) problem exists in SQL Compare, but it is much more of an issue in Oracle, as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.
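    As an illustration of the third scenario, the generated steps boil down to this pattern (a sketch; the table, column and default value are hypothetical):

        -- Backfill existing NULLs, then tighten the constraint.
        UPDATE tbl1 SET col1 = 0 WHERE col1 IS NULL;
        ALTER TABLE tbl1 MODIFY col1 NOT NULL;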

  • World Backup Day

    - by red(at)work
    Here at Red Gate Towers, the SQL Backup development team have been hunkered down in their shed for the last few months, with the toolbox, blowtorch and chamois leather out, upgrading SQL Backup. When we started, autumn leaves were falling. Now we're about to finish, spring flowers are budding. If not quite a gleaming new machine, at the very least a familiar, reliable engine with some shiny new bits on it will trundle magnificently out of the workshop.

    One of the interesting things I've noticed about working on software development teams is that the team is together for so long 'implementing' stuff - designing, coding, testing, fixing bugs and so on - that you occasionally forget why you're doing what you're doing. Doubt creeps in. It feels like a long time since we launched this project in a fanfare of optimism and enthusiasm, and all that clarity of purpose and mission "yee-haw" has dissipated with the daily pressures of development. Every now and again, we look up from our bunker and notice all those thousands of users out there, with their different configurations and working practices, each with their own set of problems and requirements, and we ask ourselves "does anyone care about what we're doing?" Has the world moved on while we've been busy? Could we have been doing something more useful with the time and talent of all these excellent people we've assembled?

    In truth, you can research and test and validate all you like, but you never really know if you've done the right thing (or at least, something valuable for some users) until you release. All projects suffer this insecurity. If they don't, maybe you're not worrying enough about what you're building. The two enemies of software development are certainty and complacency. Oh, and of course, rival teams with Nerf guns.

    The goal of SQL Backup 7 is to make it so easy to schedule regular restores of your backups that you have no excuse not to. Why schedule a restore? Because your data is not as good as your last backup. It's only as good as your last successful restore. If you're not checking your backups by restoring them and running an integrity check on the database, you're only doing half the job. It seems that most DBAs know that this is best practice, but it can be tricky and time-consuming to set up, so it's one of those tasks that can get forgotten in the midst of all the other demands on their time. Sometimes, they're just too busy firefighting. But what if it was simple to do? That was our inspiration for SQL Backup 7.

    So it was heartening to read Brent Ozar's blog post the other day about World Backup Day. To be honest, I'd never heard of World Backup Day (Talk Like a Pirate Day, yes, but not this one); however, its emphasis on not just backing up your data but checking the validity of those backups was exactly the same message we had in mind when building SQL Backup 7. It's printed on a piece of A3 above our planning board - "Make backup verification so easy to do that no DBA has an excuse for not doing it". It's the missing piece that completes the puzzle. Simple idea, great concept, useful feature, but, as it turned out, far from straightforward to implement.

    The problem is the future. As Marty McFly discovered over the course of three movies, the future is uncertain and hard to predict - so when you are scheduling a restore to take place an hour, day, week or month after the backup, there are all kinds of questions that you wouldn't normally have to consider. Where will this backup live? Will it even exist at the time? Will it be split into multiple files? What will the file names be? Will it be encrypted? What files should it be restored to? SQL Backup needs to know what to expect at the time the restore job is actually run. Of course, a DBA will know the answer to all these questions, but to deliver the whole point of version 7, we wanted to make it easy for them to input that information into SQL Backup. We think we've done that.

    When you create your scheduled backup job, there is now an option to create a "reminder" to follow it up with a scheduled restore to verify the resulting backups. Actually, it's much more than a reminder, as it stores all the relevant data so you can click it and pre-populate the wizard with all the right settings to set up your verification restores. Simple. But, what do you think? We'd love you to try it.

    Post by Brian Harris

  • Is MS Reporting Services suitable for stand-alone reports?

    - by JMarsch
    Hello all: I work for an ISV. Our product can use both SQL Server and Oracle as its back-end server. It includes a number of reports (currently in Crystal). We are investigating moving to Microsoft Reporting Services, but I'm beginning to think that it's a bad idea. We want our reports to look and feel as though they are a part of our application, and we will not require SQL Server (the customer can choose Oracle). Although I see that Reporting Services supports a stand-alone mode (RDLC), the boundary between what requires SQL Server and what doesn't looks extremely ambiguous (for example, the stand-alone report builder appears to require SQL Server, and most of the documentation appears to be part of SQL Server's documentation). It looks to me like if I want to keep my application DB-agnostic, I had better steer clear of Reporting Services. Have I missed the boat here?

  • Rails 3 Join Question for Votes Table

    - by Dex
    I have a table posts and a polymorphic table votes. The votes table looks like this:

        create_table :votes do |t|
          t.references :user      # user_id
          t.integer :vote         # the vote value
          t.references :votable   # votable_type and votable_id
        end

    I want to list all posts that the user has not yet voted on. Right now I'm basically taking all the posts they've already voted on and subtracting that from the entire set of posts. It works, but it's not very convenient as I currently have it.

        def self.where_not_voted_on_by(user)
          sql = "SELECT P.* FROM posts P LEFT OUTER JOIN ("
          sql << where_voted_on_by(user).to_sql
          sql << ") ALREADY_VOTED_FOR ON P.id = ALREADY_VOTED_FOR.id WHERE (user_id is null)"
          puts sql
          resultset = connection.select_all(sql)
          results = []
          resultset.each do |r|
            results << Post.new(r)
          end
          results
        end

        def self.where_voted_on_by(user)
          joins(:votes.outer).where("user_id = #{user.id}").select("posts.*, votes.user_id")
        end
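    For comparison, the raw SQL this code is working towards can be expressed directly as a null-rejecting outer join (a sketch; it assumes the conventional polymorphic columns from the migration above):

        SELECT p.*
        FROM   posts p
        LEFT OUTER JOIN votes v
               ON  v.votable_id   = p.id
               AND v.votable_type = 'Post'
               AND v.user_id      = :user_id
        WHERE  v.id IS NULL;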

  • Jet Database (ms access) ExecuteNonQuery - Can I make it faster?

    - by bluebill
    Hi all, I have this generic routine that I wrote that takes a list of SQL strings and executes them against the database. Is there any way I can make this work faster? Typically it'll see maybe 200 inserts or deletes or updates at a time. Sometimes there is a mixture of updates, inserts and deletes. Would it be a good idea to separate the queries by type (i.e. group inserts together, then updates and then deletes)? I am running this against an MS Access database and using VB.NET 2005.

        Public Function ExecuteNonQuery(ByVal sql As List(Of String), ByVal dbConnection As String) As Integer
            If sql Is Nothing OrElse sql.Count = 0 Then Return 0
            Dim recordCount As Integer = 0
            Using connection As New OleDb.OleDbConnection(dbConnection)
                connection.Open()
                Dim transaction As OleDb.OleDbTransaction = connection.BeginTransaction()
                'Using cmd As New OleDb.OleDbCommand()
                Using cmd As OleDb.OleDbCommand = connection.CreateCommand
                    cmd.Connection = connection
                    cmd.Transaction = transaction
                    For Each s As String In sql
                        If Not String.IsNullOrEmpty(s) Then
                            cmd.CommandText = s
                            recordCount += cmd.ExecuteNonQuery()
                        End If
                    Next
                    transaction.Commit()
                End Using
            End Using
            Return recordCount
        End Function

  • form_dropdown in codeigniter

    - by Patrick
    I'm getting a strange behaviour from form_dropdown - basically, when I reload the page after validation, the values are screwed up. This bit generates 3 drop-downs with days, months and years:

        $days = array(0 => 'Day...');
        for ($i = 1; $i <= 31; $i++) {
            $days[] = $i;
        }
        $months = array(0 => 'Month...', );
        for ($i = 1; $i <= 12; $i++) {
            $months[] = $i;
        }
        $years = array(0 => 'Year...');
        for ($i = 2010; $i <= 2012; $i++) {
            $years[$i] = $i;
            echo "<pre>";
            print_r($years);
            echo "</pre>"; //remove this
        }
        $selected_day = (isset($selected_day)) ? $selected_day : 0;
        $selected_month = (isset($selected_month)) ? $selected_month : 0;
        $selected_year = (isset($selected_year)) ? $selected_year : 0;
        echo "<p>";
        echo form_label('Select date:', 'day', array('class' => 'left'));
        echo form_dropdown('day', $days, $selected_day, 'class="combosmall"');
        echo form_dropdown('month', $months, $selected_month, 'class="combosmall"');
        echo form_dropdown('year', $years, $selected_year, 'class="combosmall"');
        echo "</p>";

    ...and generates this:

        <p><label for="day" class="left">Select date:</label>
        <select name="day" class="combosmall">
        <option value="0" selected="selected">Day...</option>
        <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option>
        <option value="5">5</option> <option value="6">6</option> <option value="7">7</option> <option value="8">8</option>
        <option value="9">9</option> <option value="10">10</option> <option value="11">11</option> <option value="12">12</option>
        <option value="13">13</option> <option value="14">14</option> <option value="15">15</option> <option value="16">16</option>
        <option value="17">17</option> <option value="18">18</option> <option value="19">19</option> <option value="20">20</option>
        <option value="21">21</option> <option value="22">22</option> <option value="23">23</option> <option value="24">24</option>
        <option value="25">25</option> <option value="26">26</option> <option value="27">27</option> <option value="28">28</option>
        <option value="29">29</option> <option value="30">30</option> <option value="31">31</option>
        </select>
        <select name="month" class="combosmall">
        <option value="0" selected="selected">Month...</option>
        <option value="1">1</option> <option value="2">2</option> <option value="3">3</option> <option value="4">4</option>
        <option value="5">5</option> <option value="6">6</option> <option value="7">7</option> <option value="8">8</option>
        <option value="9">9</option> <option value="10">10</option> <option value="11">11</option> <option value="12">12</option>
        </select>
        <select name="year" class="combosmall">
        <option value="0" selected="selected">Year...</option>
        <option value="2010">2010</option>
        <option value="2011">2011</option>
        <option value="2012">2012</option>
        </select></p>

    However, when the form is reloaded after validation, the same code above generates this:

        <!-- days and months... -->
        <select name="year" class="combosmall">
        <option value="0" selected="selected">Year...</option>
        <option value="1">2010</option>
        <option value="2">2011</option>
        <option value="3">2012</option>
        </select>

    So basically the values start from 1 instead of 2010. The same happens to days and months, but obviously it doesn't make any difference in those cases, as the values would start from 1 anyway. How can I fix this - and why does it happen?

    edit: the validation rules are:

        $this->load->library('form_validation');
        //...rules for other fields..
        $this->form_validation->set_rules('day', 'day', 'required|xss_clean');
        $this->form_validation->set_rules('month', 'month', 'required|xss_clean');
        $this->form_validation->set_rules('year', 'year', 'required|xss_clean');
        $this->form_validation->set_error_delimiters('<p class="error">', '</p>');
        //define other errors
        if ($this->input->post('day') == 0 || $this->input->post('month') == 0 || $this->input->post('year') == 0) {
            $data['error'] = "Please check the date of your event.";
        }

  • Book Review (Book 10) - The Information: A History, a Theory, a Flood

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for March 2012 was: The Information: A History, a Theory, a Flood by James Gleick. I was traveling at the end of last month, so I'm a bit late posting this review here.

    Why I chose this book: My personal belief about computing is this: all computing technology is simply re-arranging data. We take data in, we manipulate it, and we send it back out. That's computing. I had heard from some folks about this book and its treatment of data. I heard that it dealt with the basics of data - and the semantics of data, information and so on. It also deals with the earliest forms of the history of information, which fascinates me. It's similar, I was told, to GEB, which is a favorite book of mine as well, so that was a bonus. Some folks I talked to liked it, some didn't - so I thought I would check it out.

    What I learned: I liked the book. It was longer than I thought - it took quite a while to read, even though I tend to read quickly. This is the kind of book you take your time with. It does in fact deal with the earliest forms of human interaction and the basics of data. I learned, for instance, that the genesis of the binary communication system lies in the invention of telegraph (far-writing) codes, and that the earliest forms of communication were expensive. In fact, many ciphers were invented not to hide military secrets, but to compress information. A sort of early "lol-speak" to keep the cost of transmitting data low!

    I think the comparison with GEB is a bit over-reaching. GEB is far more specific, fanciful and so on. In fact, this book felt more like something from Richard Dawkins, and tended to wander around the subject quite a bit. I imagine the author doing his research and writing each chapter as a book that followed on from the last one. This is possibly what bothered those who tended not to like it, I think. Towards the middle of the book, I think the author tended to be a bit too fragmented even for me. He began to delve into memes, biology and more - I think he might have been better off breaking that off into another work. The existentialism just seemed jarring.

    All in all, I liked the book. I recommend it to any technical professional, specifically those involved with data technology. And isn't that all of us? :)

  • Book Review (Book 11) - Applied Architecture Patterns on the Microsoft Platform

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for April 2012 was: Applied Architecture Patterns on the Microsoft Platform. I was traveling at the end of last month, so I'm a bit late posting this review here.

    Why I chose this book: I actually know a few of the authors of this book, so when they told me about it I wanted to check it out. The premise of the book is exactly as it states in the title - to learn how to solve a problem using products from Microsoft.

    What I learned: I liked the book - a lot. They've arranged the content in a "Solution Decision Framework" that presents a few elements to help you identify a need, propose alternate solutions to solve it, and then give the rationale for the choice. But the payoff is that the authors then walk through the solution they implement and what they ran into doing it. I really liked this approach. It's not a huge book, but one I've referred to again since I've read it. It's fairly comprehensive, and includes server-oriented products, not things like Microsoft Office or other client-side tools. In fact, I would LOVE to have a work like this for Open Source and other vendors as well - it would make for a great library for a Systems Architect. This one is unashamedly aimed at the Microsoft products, and even if I didn't work here, I'd be fine with that. As I said, it would be interesting to see some books on other platforms like this, but I haven't run across something that presents other systems in quite this way.

    And that brings up an interesting point - this book is aimed at folks who create solutions within an organization. It's not aimed at Administrators, DBAs, Developers or the like, although I think all of those audiences could benefit from reading it. The solutions are made up, and not explored to a huge level of depth - nor should they be. It's a great exercise in thinking these kinds of things through in a structured way.

    The information is a bit dated, especially for Windows and SQL Azure. While the general concepts hold, the cloud platform from Microsoft is evolving so quickly that any printed book finds it hard to keep up with the improvements. I do have one quibble with the text - the chapters are a bit uneven. This is always a danger with multiple authors, but it shows up in a couple of chapters. I winced at one of the chapters that tried to take a more conversational, humorous style. This kind of academic work doesn't lend itself to that style.

    I recommend you get the book - and use it. I hope they keep it updated - I'll be a frequent customer. :)

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news.

    Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT - that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they - it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there - but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now.

    All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above.

    Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. See here the result of such an operation - note how the refactor has spliced [LogMessageTypeName] into the middle of a string literal:

        SELECT *
        FROM   bi.ProcessMessageLog pml
        INNER JOIN bi.[LogMessageType] lmt
            ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
        WHERE  pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
        AND    lmt.[LogMessageType] = 'Warning';

    which is obviously not ideal. Thankfully that seems to have been solved with this latest release.

    One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly" - I was being kind when I said that!

    @Jamiet

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the East Coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event.

    Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, won't bring the family, and want a well laid out conference.

    What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference - only questions about Microsoft speakers. I know it's very difficult to write a survey without biasing the answers one way or another. There was also no choice for people to show their preference: would they rather have Microsoft speakers, or have the summit held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers to the SQLCAT and CSS teams; surely that indicates another issue - a lack of understanding of what these teams do.

    All in all, it is clear that people showed they want an event outside of Seattle, but don't want PASS to put money into that at the expense of other community activities. I find it surprising that there appears to have been a huge weighting towards certain questions, prioritising them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013 - or maybe they will rethink 2012, who knows.

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files. One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB. I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so. It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second. You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes. ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable. The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file. You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!

  • A marketing view of SQLBits

    - by simonsabin
    One of the most enjoyable things about SQLBits is reading all the blog posts/reviews after the event. We are normally fortunate enough not to have pissed anyone off, and so the posts are generally positive. Most pieces are from attendees; however, this piece is from the wife of a sponsor: http://iknowmarketing.wordpress.com/2012/04/02/events-community-vs-commercial/ . It's a nice piece for us, as it highlights how we work and how we focus on making the event personal for all those involved. We deal...(read more)

  • SSAS processing error: Client unable to establish connection; 08001; Encryption not supported on the client.; 08001

    - by Kevin Shyr
    After getting the cube to deploy and process successfully on Friday, I was baffled on Monday that the newly added dimension caused cube processing to break. I followed my first instinct and discarded all my changes, reverting back to Friday's version, but had no luck. The error message (attached below) did not help, as I was looking for some kind of SQL service error. After examining the Windows server log and the SQL Server log, I just couldn't see anything wrong.

    After swearing for some time, and with the help of going off and working on something else for a while, I came back to the solution and looked at the data source. Even though I know I never changed the provider (the default setup gave me SQL Native Client), I decided to change it and give OLE DB a try. This simple change allowed my cube to process successfully again. While I don't understand why the same settings that worked last week don't work this week, I don't have all the information to say with certainty that nothing has changed in the environment (firewall changes, server updates, etc.).

    SSAS processing error:

        <Batch>
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>DWH Sales Facts</DatabaseID>
                <CubeID>DWH Sales Facts</CubeID>
              </Object>
              <Type>ProcessFull</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>

    Processing Dimension 'Date' completed.

    Errors and Warnings from Response:
    - OLE DB error: OLE DB or ODBC error: A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.; 08001; Client unable to establish connection; 08001; Encryption not supported on the client.; 08001.
    - Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'DWH Sales Facts', Name of 'DWH Sales Facts'.
    - Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Currency', Name of 'Currency' was being processed.
    - Errors in the OLAP storage engine: An error occurred while the 'Currency Dim ID' attribute of the 'Currency' dimension from the 'DWH Sales Facts' database was being processed.
    - Internal error: The operation terminated unsuccessfully.
    - Server: The operation has been cancelled.
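    For anyone wanting to reproduce the fix: the change amounts to editing the data source's connection string in the Analysis Services project, roughly as follows. This is a sketch - the server name and catalog are hypothetical; the provider tokens are the standard ones for the SQL Server 2008 R2 era:

        Before (SQL Server Native Client, the default):
            Provider=SQLNCLI10.1;Data Source=MYSERVER;Integrated Security=SSPI;Initial Catalog=DWH_Source
        After (Microsoft OLE DB Provider for SQL Server):
            Provider=SQLOLEDB.1;Data Source=MYSERVER;Integrated Security=SSPI;Initial Catalog=DWH_Source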

  • Intermittent wired network issues in 14.04

    - by Tommy Brunn
    Since yesterday, my wired network connection has been dropping for a couple of seconds every 30 seconds or so. To my knowledge, I had not made any changes to my network. Output of ifconfig -a:

        eth0      Link encap:Ethernet  HWaddr 6c:f0:49:b9:b1:7f
                  inet addr:192.168.0.16  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::6ef0:49ff:feb9:b17f/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11597 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:9783 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:10101682 (10.1 MB)  TX bytes:1215142 (1.2 MB)
                  Interrupt:48 Base address:0x8000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:96691 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:96691 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:13594355 (13.5 MB)  TX bytes:13594355 (13.5 MB)

    Output of lspci | grep Ethernet:

        04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)

    Pinging my router:

        PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
        64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.435 ms
        64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.571 ms
        ping: sendmsg: Network is unreachable
        ping: sendmsg: Network is unreachable
        ping: sendmsg: Network is unreachable
        ping: sendmsg: Network is unreachable
        ping: sendmsg: Network is unreachable
        64 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1.03 ms

    And the output of route:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
        192.168.0.0     *               255.255.255.0   U     1      0        0 eth0

    Some messages from /var/log/syslog (tail -f /var/log/syslog):

        Jun 6 10:37:34 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:37:34 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:37:37 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150)
        Jun 6 10:37:37 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150)
        Jun 6 10:37:39 lolbox dhclient: XMT: Solicit on eth0, interval 8660ms.
        Jun 6 10:37:39 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:37:39 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:37:47 lolbox dhclient: XMT: Solicit on eth0, interval 16820ms.
        Jun 6 10:37:47 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:37:47 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:04 lolbox dhclient: XMT: Solicit on eth0, interval 34410ms.
        Jun 6 10:38:04 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:38:04 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out.
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13045
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled...
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started...
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5]
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED
        Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1'
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete.
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0]
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0]
        Jun 6 10:37:34 lolbox whoopsie[1133]: online
        Jun 6 10:38:16 lolbox whoopsie[1133]: offline
        Jun 6 10:38:16 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
        Jun 6 10:38:16 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13044
        Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed
        Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0.
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS.
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0.
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16.
        Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS.
        Jun 6 10:38:16 lolbox dnsmasq[1362]: setting upstream servers from DBus
        Jun 6 10:38:17 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:38:17 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS.
        Jun 6 10:38:17 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*.
        Jun 6 10:38:18 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'.
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1'
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled...
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started...
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled...
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete.
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting...
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0]
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful.
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled.
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete.
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started...
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0]
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds)
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13160
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds)
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13161
        Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete.
        Jun 6 10:38:18 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0.
        Jun 6 10:38:18 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:38:18 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS.
        Jun 6 10:38:18 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
        Jun 6 10:38:18 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
        Jun 6 10:38:18 lolbox dhclient: All rights reserved.
        Jun 6 10:38:18 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/
        Jun 6 10:38:18 lolbox dhclient:
        Jun 6 10:38:19 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
        Jun 6 10:38:19 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
        Jun 6 10:38:19 lolbox dhclient: All rights reserved.
        Jun 6 10:38:19 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/
        Jun 6 10:38:19 lolbox dhclient:
        Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit
        Jun 6 10:38:19 lolbox dhclient: Bound to *:546
        Jun 6 10:38:19 lolbox dhclient: Listening on Socket/eth0
        Jun 6 10:38:19 lolbox dhclient: Sending on Socket/eth0
        Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv6 state changed nbi -> preinit6
        Jun 6 10:38:19 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f
        Jun 6 10:38:19 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f
        Jun 6 10:38:19 lolbox dhclient: Sending on Socket/fallback
        Jun 6 10:38:19 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3fc9376d)
        Jun 6 10:38:19 lolbox dhclient: XMT: Solicit on eth0, interval 1020ms.
        Jun 6 10:38:19 lolbox dhclient: send_packet6: Cannot assign requested address
        Jun 6 10:38:19 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes
        Jun 6 10:38:20 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1
        Jun 6 10:38:20 lolbox dhclient: bound to 192.168.0.16 -- renewal in 41481 seconds.
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> address 192.168.0.16
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0)
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> gateway 192.168.0.1
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11'
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150'
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled...
        Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started...
        Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16.
        Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS.
        Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4.
        Jun 6 10:38:20 lolbox dhclient: XMT: Solicit on eth0, interval 2110ms.
        Jun 6 10:38:20 lolbox dhclient: send_packet6: Cannot assign requested address
        Jun 6 10:38:20 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes
        Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS.
        Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*.
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0]
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0]
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS.
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf
        Jun 6 10:38:21 lolbox dnsmasq[1362]: setting upstream servers from DBus
        Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53
        Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53
        Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53
        Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated.
        Jun 6 10:38:21 lolbox whoopsie[1133]: message repeated 2 times: [ offline]
        Jun 6 10:38:21 lolbox whoopsie[1133]: online
        Jun 6 10:38:21 lolbox ntpdate[13217]: Can't find host ntp.ubuntu.com: Name or service not known (-2)
        Jun 6 10:38:21 lolbox ntpdate[13217]: no servers can be used, exiting
        Jun 6 10:38:22 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf
        Jun 6 10:38:22 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53
        Jun 6 10:38:22 lolbox dhclient: XMT: Solicit on eth0, interval 4080ms.
        Jun 6 10:38:22 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:38:22 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:26 lolbox dhclient: XMT: Solicit on eth0, interval 8450ms.
        Jun 6 10:38:26 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:38:26 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:35 lolbox dhclient: XMT: Solicit on eth0, interval 16630ms.
        Jun 6 10:38:35 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:38:35 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:51 lolbox dhclient: XMT: Solicit on eth0, interval 34860ms.
        Jun 6 10:38:51 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
        Jun 6 10:38:51 lolbox dhclient: IA_NA status code NoAddrsAvail.
        Jun 6 10:38:58 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150)
        Jun 6 10:38:58 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150)
        Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out.
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13161
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled...
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started...
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5]
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED
        Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1'
        Jun 6 10:38:22 lolbox whoopsie[1133]: online
        Jun 6 10:39:04 lolbox whoopsie[1133]: offline
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete.
        Jun 6 10:39:04 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper)
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0]
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0]
        Jun 6 10:39:04 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13160
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0.
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS.
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0.
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16.
        Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS.
        Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed
        Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf
        Jun 6 10:39:04 lolbox dnsmasq[1362]: setting upstream servers from DBus
        Jun 6 10:39:05 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:39:05 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS.
        Jun 6 10:39:05 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*.
        Jun 6 10:39:06 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'.
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1'
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled...
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started...
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled...
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete.
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting...
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0]
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful.
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled.
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete.
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started...
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0]
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds)
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13270
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds)
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13271
        Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete.
        Jun 6 10:39:07 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0.
        Jun 6 10:39:07 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
        Jun 6 10:39:07 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS.
        Jun 6 10:39:07 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
        Jun 6 10:39:07 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
        Jun 6 10:39:07 lolbox dhclient: All rights reserved.
        Jun 6 10:39:07 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/
        Jun 6 10:39:07 lolbox dhclient:
        Jun 6 10:39:08 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
        Jun 6 10:39:08 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
        Jun 6 10:39:08 lolbox dhclient: All rights reserved.
Jun 6 10:39:08 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:39:08 lolbox dhclient: Jun 6 10:39:08 lolbox dhclient: Bound to *:546 Jun 6 10:39:08 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:39:08 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:39:08 lolbox kernel: [ 1446.098590] type=1400 audit(1402043948.002:75): apparmor="DENIED" operation="signal" profile="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=13273 comm="nm-dhcp-client." requested_mask="send" denied_mask="send" signal=term peer="/sbin/dhclient" Jun 6 10:39:08 lolbox kernel: [ 1446.098599] type=1400 audit(1402043948.002:76): apparmor="DENIED" operation="signal" profile="/sbin/dhclient" pid=13273 comm="nm-dhcp-client." requested_mask="receive" denied_mask="receive" signal=term peer="/usr/lib/NetworkManager/nm-dhcp-client.action" Jun 6 10:39:08 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:39:08 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:39:08 lolbox dhclient: Sending on Socket/fallback Jun 6 10:39:08 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3e0183b9) Jun 6 10:39:08 lolbox dhclient: XMT: Solicit on eth0, interval 1050ms. Jun 6 10:39:08 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:39:08 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:39:09 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:39:09 lolbox dhclient: bound to 192.168.0.16 -- renewal in 35498 seconds. Jun 6 10:39:09 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:39:09 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:39:09 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:39:10 lolbox dhclient: XMT: Solicit on eth0, interval 2180ms. Jun 6 10:39:10 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:10 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. 
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:39:10 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:39:10 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:39:10 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:39:10 lolbox whoopsie[1133]: online Jun 6 10:39:10 lolbox ntpdate[13339]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:39:10 lolbox ntpdate[13339]: no servers can be used, exiting Jun 6 10:39:11 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:39:11 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:39:12 lolbox dhclient: XMT: Solicit on eth0, interval 4350ms. Jun 6 10:39:12 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:12 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:16 lolbox dhclient: XMT: Solicit on eth0, interval 8740ms. Jun 6 10:39:16 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:16 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:39:17 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:17 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:25 lolbox dhclient: XMT: Solicit on eth0, interval 17610ms. Jun 6 10:39:25 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:39:25 lolbox dhclient: IA_NA status code NoAddrsAvail.

    Read the article

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have had to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.

    The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL, or it can use host variables in the WHERE clause (we call that filtered), along the lines of the sketch below.
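    For example, a filtered query might look something like this. The table, columns and bind names are purely illustrative rather than taken from the product; :F1 stands for a user-visible filter value and :H1 for a hidden filter value supplied by the caller.

        -- Illustrative filtered zone SQL; table, columns and bind names are hypothetical.
        -- :F1 is a normal (user-visible) filter value, :H1 a hidden filter value.
        SELECT acct_id,
               cust_cl_cd,
               setup_dt
          FROM ci_acct
         WHERE cust_cl_cd = :F1
           AND setup_dt >= :H1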
    Once you are happy with the SQL (and it performs acceptably), you can incorporate it into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE, as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI as criteria (remember, explorer zones can be used on the user interface as well), but for web services they can be used as normal filters, which means you can use up to 20 filters in total.

    Once you are happy with the zone, you need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of structure. The schema pattern looks like this:

    Zone element - maps to the ZONE_CD element; the default value is the name of the zone you just created. This links the business service to the zone.
    Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
    Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
    results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set of the zone.

    An example is the schema for the F1-USGRACML zone, which returns the access modes for user group and application service filters. In that example, the userGroup and applicationService elements are the filters, and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point; there are lots of examples in the product that you can investigate. One recommendation, to save time: copy the schema from one of the examples rather than typing it from scratch. You can simply modify the tags and other elements to suit your needs.

    Once the Business Service is defined, it can simply be exposed as a Web Service by registering an XAI Inbound Service that uses the Business Service definition as its basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components, as it allows interfaces to be simplified.

    This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year, and I look forward to communicating the exciting developments we are working on as they are released.

    Read the article

  • Visual Studio 2010 SP1

    - by ScottGu
    Last week we shipped Service Pack 1 of Visual Studio 2010 and the Visual Studio Express Tools. In addition to bug fixes and performance improvements, SP1 includes a number of feature enhancements: improved local help support, IntelliTrace support for 64-bit applications and SharePoint, built-in Silverlight 4 tooling support, unit testing support when targeting .NET 3.5, a new performance wizard for Silverlight, IIS Express and SQL CE tooling support for web projects, HTML5 IntelliSense for ASP.NET, and more. TFS 2010 SP1 was also released last week, together with a new TFS Project Server Integration Pack and Load Test Feature Pack. Brian Harry has a good blog post about the TFS updates here.

    VS 2010 SP1 Download

    Click here to download and install SP1 for all versions of Visual Studio (including Express). The installer examines what you have installed on your machine and only downloads the servicing updates necessary to bring it to SP1, so the time it takes to download and update will depend on what you have installed. Jon Galloway has a good blog post on tips to speed up the SP1 install by uninstalling unused components.

    Web Platform Installer Bundles

    In addition to the core VS 2010 SP1 installer, we have also put together two Web Platform Installer (WebPI) bundles that automate installing SP1 together with additional web-specific components:

    VS 2010 SP1 WebPI Bundle
    Visual Web Developer 2010 SP1 WebPI Bundle

    The above WebPI bundles automate installing:

    VS 2010/VWD 2010 SP1
    ASP.NET MVC 3 (runtime + tools support)
    IIS 7.5 Express
    SQL Server Compact Edition 4.0 (runtime + tools support)
    Web Deployment 2.0

    Only the components that are not already installed on your machine will be downloaded, so you can run a WebPI bundle at any time (even if you have already installed SP1 or ASP.NET MVC 3) without wasting time downloading or installing these components again.

    Earlier this year I did two posts that discussed how to use IIS Express and SQL CE with ASP.NET projects in SP1. Read the posts below to learn more about how to use them after you run the above bundles:

    Visual Studio 2010 SP1 and IIS Express
    Visual Studio 2010 SP1 and SQL CE for ASP.NET

    The above feature additions work with any web project type, including both ASP.NET Web Forms and ASP.NET MVC.

    Additional SP1 Notes

    Two additional notes about VS 2010 SP1:

    1) One change we made between RTM and SP1 is that, by default, Visual Studio now uses software rendering instead of hardware acceleration when running on Windows XP. We made this change because we've seen reports of (often inconsistent) performance issues caused by older video drivers. Running in software mode eliminates these and delivers consistent speeds. You can optionally re-enable hardware acceleration in SP1 using Visual Studio's Tools->Options menu command; we did not remove support for hardware acceleration on XP, we simply changed the default setting. Jason Zander has written more details on the change and how to re-enable hardware acceleration inside VS here.

    2) We have discovered an issue where installing SP1 can cause T-SQL IntelliSense within SQL Server Management Studio 2008 R2 to stop working (typing still works, but IntelliSense doesn't show up). The SQL team is investigating this now and I'll post an update on how to fix this once more details are known.

    Hope this helps,

    Scott

    P.S.
I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • A temporary disagreement

    - by Tony Davis
    Last month, Phil Factor caused a furore amongst some MVPs with an article that attempted to offer simple advice to developers regarding the use of table variables, versus local and global temporary tables, in their code. Phil makes clear that table variables do come with some fairly major limitations (no distribution statistics, no parallel query plans for queries that modify table variables), but goes on to suggest that for reasonably small-scale, strategic uses, and with a bit of due care and testing, table variables are a "good thing". Not everyone shares his opinion; in fact, I imagine he was rather aghast to learn that there were those who felt his article was akin to pulling the pin out of a grenade and tossing it into the database. Table variables should be avoided in almost all cases, according to their advice, in favour of temp tables. In other words, a fairly major feature of SQL Server should be more-or-less 'off limits' to developers.

    The problem with temp tables is that, because they are scoped either to the procedure or the connection, it is easy to allow them to hang around for too long, eating up precious memory and bulking up the shared tempdb database. Unless they are explicitly dropped, global temporary tables, and local temporary tables created within a connection rather than within a stored procedure, will persist until the connection is closed or, with connection pooling, until the connection is reused. It's also quite common with ASP.NET applications to have connection leaks, as Bill Vaughn explains in his chapter in the "SQL Server Deep Dives" book: the web page exits without closing the connection object, maybe due to an error condition, and the connection then hangs around in the heap for what might be hours before being picked up by the garbage collector. Table variables are much safer in this regard, since they are batch-scoped and so are cleaned up automatically once the batch is complete, which also makes them intuitive for developers to use because they conform to scoping rules that are closer to those in procedural code. The short T-SQL sketch below makes the contrast concrete.
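    (The table and column names here are hypothetical, and the sketch is purely illustrative rather than anything taken from Phil's article.)

        -- Table variable: batch-scoped, cleaned up automatically when the
        -- batch completes, but carries no distribution statistics.
        DECLARE @RecentOrders TABLE (OrderID INT PRIMARY KEY, OrderDate DATETIME);
        INSERT INTO @RecentOrders (OrderID, OrderDate)
        SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20101101';

        -- Local temporary table: carries statistics, but lives in tempdb
        -- until the creating procedure ends or the (possibly pooled)
        -- connection is closed, unless it is dropped explicitly.
        CREATE TABLE #RecentOrders (OrderID INT PRIMARY KEY, OrderDate DATETIME);
        INSERT INTO #RecentOrders (OrderID, OrderDate)
        SELECT OrderID, OrderDate FROM dbo.Orders WHERE OrderDate >= '20101101';
        DROP TABLE #RecentOrders;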
    On the surface, then, table variables look like an ideal way to deal with issues related to tempdb memory hogging. So why did Phil qualify his recommendation to use them? This is another of those cases where, like scalar UDFs and table-valued multi-statement UDFs, developers can sometimes get into trouble with a relatively benign-looking feature, due to the way it's been implemented in SQL Server. Once again, the biggest problem is how they are handled internally by the SQL Server query optimizer, which can make very poor choices for JOIN orders and so on in the absence of statistics, especially when joining to tables with highly-skewed data. The resulting execution plans can be horrible, as will be the resulting performance. If the JOIN is to a large table, that will hurt.

    Ideally, Microsoft would simply fix this issue so that developers can't get burned in this way; table variables have been around since SQL Server 2000, so Microsoft has had a bit of time to get it right. As I commented in regard to UDFs, when developers discover issues like this with standard features, the database becomes an alien planet to them, where death lurks around each corner, and they continue to avoid these "killer" features years after the problems have eventually been resolved.

    In the meantime, what is the right approach? Is it to say "hammers can kill, don't ever use hammers", or is it to try to explain, as Phil's article and follow-up blog post have tried to do, what the feature was intended for, why care must be applied in its use, and so enable developers to make properly-informed decisions, without requiring them to delve deep into the inner workings of SQL Server?

    Cheers, Tony.

    Read the article

  • Oracle Big Data Software Downloads

    - by Mike.Hallett(at)Oracle-BI&EPM
    Companies have been making business decisions for decades based on transactional data stored in relational databases. Beyond that critical data lies a potential treasure trove of less structured data: weblogs, social media, email, sensors, and photographs that can be mined for useful information. Oracle offers a broad, integrated portfolio of products to help you acquire and organize these diverse data sources and analyze them alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data Connectors

    Downloads here; this includes:

    Oracle SQL Connector for Hadoop Distributed File System Release 2.1.0
    Oracle Loader for Hadoop Release 2.1.0
    Oracle Data Integrator Companion 11g
    Oracle R Connector for Hadoop v 2.1

    Oracle Big Data Documentation

    The Oracle Big Data solution offers an integrated portfolio of products to help you organize and analyze your diverse data sources alongside your existing data to find new insights and capitalize on hidden relationships.

    Oracle Big Data, Release 2.2.0 - E41604_01 zip (27.4 MB)
    Integrated Software and Big Data Connectors User's Guide HTML PDF

    Oracle Data Integrator (ODI) Application Adapter for Hadoop

    Apache Hadoop is designed to handle and process data that typically comes from non-relational sources, in volumes beyond what relational databases handle. Typical processing in Hadoop includes data validation and transformations that are programmed as MapReduce jobs. Designing and implementing a MapReduce job usually requires expert programming knowledge. However, when you use Oracle Data Integrator with the Application Adapter for Hadoop, you do not need to write MapReduce jobs: Oracle Data Integrator uses Hive and the Hive Query Language (HiveQL), a SQL-like language, to implement them. Employing familiar and easy-to-use tools and pre-configured knowledge modules (KMs), the application adapter provides the following capabilities:

    Loading data into Hadoop from the local file system and HDFS
    Performing validation and transformation of data within Hadoop
    Loading processed data from Hadoop to an Oracle database for further processing and generating reports

    Oracle Database Loader for Hadoop

    Oracle Loader for Hadoop is an efficient and high-performance loader for fast movement of data from a Hadoop cluster into a table in an Oracle database. It pre-partitions the data if necessary and transforms it into a database-ready format. Oracle Loader for Hadoop is a Java MapReduce application that balances the data across reducers to help maximize performance.

    Oracle R Connector for Hadoop

    Oracle R Connector for Hadoop is a collection of R packages that provide:

    Interfaces to work with Hive tables, the Apache Hadoop compute infrastructure, the local R environment, and Oracle database tables
    Predictive analytic techniques, written in R or Java as Hadoop MapReduce jobs, that can be applied to data in HDFS files

    You install and load this package as you would any other R package.
    Using simple R functions, you can perform tasks such as:

    Access and transform HDFS data using a Hive-enabled transparency layer
    Use the R language for writing mappers and reducers
    Copy data between R memory, the local file system, HDFS, Hive, and Oracle databases
    Schedule R programs to execute as Hadoop MapReduce jobs and return the results to any of those locations

    Oracle SQL Connector for Hadoop Distributed File System

    Using Oracle SQL Connector for HDFS, you can use an Oracle Database to access and analyze data residing in Hadoop in these formats:

    Data Pump files in HDFS
    Delimited text files in HDFS
    Hive tables

    For other file formats, such as JSON files, you can stage the input in Hive tables before using Oracle SQL Connector for HDFS. Oracle SQL Connector for HDFS uses external tables to provide Oracle Database with read access to Hive tables, and to delimited text files and Data Pump files in HDFS; a simplified sketch of such an external table appears below.
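    To give a feel for the mechanism only: the sketch below shows roughly what an external table over comma-delimited HDFS text might look like. All object names and paths are made up, and in practice the connector's tooling generates the real DDL for you, so treat this as a shape rather than as reference syntax.

        -- Hypothetical external table giving the database read access to
        -- delimited text files in HDFS; names, the directory object and the
        -- preprocessor reference are illustrative only.
        CREATE TABLE weblogs_ext (
          log_ts  VARCHAR2(30),
          user_id VARCHAR2(20),
          url     VARCHAR2(200)
        )
        ORGANIZATION EXTERNAL (
          TYPE ORACLE_LOADER
          DEFAULT DIRECTORY osch_dir
          ACCESS PARAMETERS (
            RECORDS DELIMITED BY NEWLINE
            PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream'
            FIELDS TERMINATED BY ','
          )
          LOCATION ('osch-weblogs-00001', 'osch-weblogs-00002')
        )
        REJECT LIMIT UNLIMITED;

        -- Once defined, the HDFS data can be queried or joined like any table:
        SELECT user_id, COUNT(*) FROM weblogs_ext GROUP BY user_id;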
    Related Documentation

    Cloudera's Distribution Including Apache Hadoop Library HTML
    Oracle R Enterprise HTML
    Oracle NoSQL Database HTML

    Recent Blog Posts

    Big Data Appliance vs. DIY Price Comparison
    Big Data: Architecture Overview
    Big Data: Achieve the Impossible in Real-Time
    Big Data: Vertical Behavioral Analytics
    Big Data: In-Memory MapReduce
    Flume and Hive for Log Analytics
    Building Workflows in Oozie

    Read the article

  • Silverlight Cream for December 13, 2010 -- #1010

    - by Dave Campbell
    In this Issue: Rénald Nollet, Benjamin Gavin, Dennis Doomen, Tim Greenfield, Mike Taulty, Jeff Blankenburg, Michael Crump, Laurent Duveau, Dragos Manolescu, KeyboardP, and Yochay Kiriaty.

    Above the Fold:
    Silverlight: "Silverlight RIA Services and Basic, Anonymous Authentication" by Benjamin Gavin
    WP7: "Solving Circular Navigation in Windows Phone Silverlight Applications" by Yochay Kiriaty
    SQL Azure: "SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB" by Rénald Nollet

    Shoutouts:
    Yochay Kiriaty has a post up on the Windows Phone Developer Blog about open source (MSPL) projects helping WP7 devs: Windows Phone Recipes – Helping the Community
    Jesse Liberty's latest Yet Another Podcast is up and this time it's Joe Stagner: Yet Another Podcast #18 – Joe Stagner
    Josh Schwartzberg sent me this link to what is apparently his yearly web-only rock Christmas album: MetalXmas... done in Silverlight and RIA Services

    From SilverlightCream.com:

    SQL Azure Database Manager – Part 1 : How to connect to your SQL Azure DB
    Rénald Nollet posted Part 1 of a series on a SQL Azure database manager all in Silverlight... has a live demo running, some description, and is making us wait for the next part!

    Silverlight RIA Services and Basic, Anonymous Authentication
    Benjamin Gavin has a quick post up resolving a basic RIA Services problem that I bet a lot of folks are looking for the answer on... like 500-series errors... cool little find he ferreted out...

    A night of Silverlight, WPF, unit testing and Caliburn Micro
    Dennis Doomen, in concert with his employer, gave a couple of talks at the local DotNED user group, covering literally a cornucopia of topics... slides and example code for both talks... lots of material here...

    Tim Greenfield on PuzzleTouch WP7 Application
    Tim Greenfield is the latest WP7 app developer to be interviewed by the SilverlightShow crew... lots of interesting comments and insight from Tim.

    Rebuilding the PDC 2010 Silverlight Application (Part 4)
    Mike Taulty has part 4 of his PDC 2010 Silverlight app construction project up, taking the app into Blend and covering the considerations that brought to the table.

    What I Learned In WP7 – Issue #2
    Jeff Blankenburg continues his "What I Learned" series with this discussion about fonts, the Non-Linear Navigation service I mention below, and possible WP7 jobs.

    Part 3 of 4 : Tips/Tricks for Silverlight Developers
    Michael Crump has Part 3 of his Tips/Tricks up today. Lots of goodies this time: underlining in a TextBlock, getting browser info, startup params, VisualTreeHelper, and child windows.

    My Windows Phone 7 presentation in Montreal
    Laurent Duveau gave a WP7 presentation in Montreal as part of the Microsoft Windows Phone 7 Developer's Briefing, and has posted his materials and slide deck.

    WP7 Code: Mocking Event Streams with IEnumerable
    Dragos Manolescu has a very cool post up on using IEnumerable to mock event streams by leveraging the IObservable/IEnumerable duality, and uses the 2D bubble app that you can run and test in the emulator without needing an accelerometer.

    Transparent Wallpapers – Video Tutorial
    KeyboardP has had so many queries about his transparent wallpaper for WP7 that he produced a video tutorial for it...

    Solving Circular Navigation in Windows Phone Silverlight Applications
    Yochay Kiriaty discusses the first recipe they are releasing... see the shoutout above, a Nonlinear Navigation Service... to help with apps that have loops in navigation.

    Stay in the 'Light!

    Read the article
