Search Results

Search found 5377 results on 216 pages for 'explicit cast operator'.


  • linux raid 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04) with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda showing ominous pre-failure SMART signs (~4000 reallocated sectors). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

    Everything went smoothly until the very end. Just as it looked like it was finishing synchronizing the last partition, the other (old) drive failed. At this point I am very unsure of the state of the system. Everything seems to work and all the files seem accessible, just as if everything had synchronized, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sdb4[2](S) sda4[0]
              478713792 blocks [2/1] [U_]
        md2 : active raid1 sdb3[1] sda3[2](F)
              244140992 blocks [2/1] [_U]
        md1 : active raid1 sdb2[1] sda2[2](F)
              244140992 blocks [2/1] [_U]
        md0 : active raid1 sdb1[1] sda1[2](F)
              9764800 blocks [2/1] [_U]
        unused devices: <none>

    First, the order of [_U] vs [U_]: why isn't it consistent across all the arrays? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial piece of information but found no explicit indication.) If I read it correctly, for md0 [_U] should mean /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3? I understand /dev/sdb4 is now a spare because it probably failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?
    I also attach the outputs of mdadm --detail for the four partitions:

        /dev/md0:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:07 2011
             Raid Level : raid1
             Array Size : 9764800 (9.31 GiB 10.00 GB)
          Used Dev Size : 9764800 (9.31 GiB 10.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:27:33 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2
                 Events : 0.7704

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       17        1        active sync   /dev/sdb1
               2       8        1        -        faulty spare  /dev/sda1

        /dev/md1:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:15 2011
             Raid Level : raid1
             Array Size : 244140992 (232.83 GiB 250.00 GB)
          Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:39:06 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : 8bcd5765:90dc93d5:cc70849c:224ced45
                 Events : 0.1508280

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       18        1        active sync   /dev/sdb2
               2       8        2        -        faulty spare  /dev/sda2

        /dev/md2:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:19 2011
             Raid Level : raid1
             Array Size : 244140992 (232.83 GiB 250.00 GB)
          Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:46:44 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : 2885668b:881cafed:b8275ae8:16bc7171
                 Events : 0.2289636

            Number   Major   Minor   RaidDevice   State
               0       0        0        0        removed
               1       8       19        1        active sync   /dev/sdb3
               2       8        3        -        faulty spare  /dev/sda3

        /dev/md3:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:22 2011
             Raid Level : raid1
             Array Size : 478713792 (456.54 GiB 490.20 GB)
          Used Dev Size : 478713792 (456.54 GiB 490.20 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 3
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:19:20 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

            Number   Major   Minor   RaidDevice   State
               0       8        4        0        active sync   /dev/sda4
               1       0        0        1        removed
               2       8       20        -        spare         /dev/sdb4

    The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure of what I should sync with what, and what is going on. I am also quite baffled by the fact that /dev/sda decided to fail exactly when the RAID finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo


  • How can I implement a database TableView like thing in C++?

    - by Industrial-antidepressant
    How can I implement a TableView-like thing in C++? I want to emulate a tiny relational-database-like thing in C++. I have data tables, and I want to transform them somehow, so I need a TableView-like class. I want filtering, sorting, freely adding and removing items, and transforming (e.g. view as UPPERCASE and so on). The whole thing is inside a GUI application, so data tables and views are attached to a GUI (or HTML or something). So how can I identify an item in the view? How can I signal it when the table is changed? Is there some design pattern for this?

    Here is a simple table, and a simple data item:

        #include <iostream>
        #include <string>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/random_access_index.hpp>

        using boost::multi_index_container;
        using namespace boost::multi_index;

        struct Data
        {
            Data() {}
            int id;
            std::string name;
        };

        struct row{};
        struct id{};
        struct name{};

        typedef boost::multi_index_container<
            Data,
            indexed_by<
                random_access<tag<row> >,
                ordered_unique<tag<id>, member<Data, int, &Data::id> >,
                ordered_unique<tag<name>, member<Data, std::string, &Data::name> >
            >
        > TDataTable;

        class DataTable
        {
        public:
            typedef Data item_type;
            typedef TDataTable::value_type value_type;
            typedef TDataTable::const_reference const_reference;
            typedef TDataTable::index<row>::type TRowIndex;
            typedef TDataTable::index<id>::type TIdIndex;
            typedef TDataTable::index<name>::type TNameIndex;
            typedef TRowIndex::iterator iterator;

            DataTable() :
                row_index(rule_table.get<row>()),
                id_index(rule_table.get<id>()),
                name_index(rule_table.get<name>()),
                row_index_writeable(rule_table.get<row>())
            {
            }

            TDataTable::const_reference operator[](TDataTable::size_type n) const
            { return rule_table[n]; }

            std::pair<iterator,bool> push_back(const value_type& x)
            { return row_index_writeable.push_back(x); }

            iterator erase(iterator position)
            { return row_index_writeable.erase(position); }

            bool replace(iterator position, const value_type& x)
            { return row_index_writeable.replace(position, x); }

            template<typename InputIterator>
            void rearrange(InputIterator first)
            { return row_index_writeable.rearrange(first); }

            void print_table() const;

            unsigned size() const
            { return row_index.size(); }

            TDataTable rule_table;
            const TRowIndex& row_index;
            const TIdIndex& id_index;
            const TNameIndex& name_index;

        private:
            TRowIndex& row_index_writeable;
        };

        class DataTableView
        {
            DataTableView(const DataTable& source_table) {}
            // How can I implement this?
            // I want filtering, sorting, signaling the upper GUI layer, and ...
        };

        int main()
        {
            Data data1;
            data1.id = 1;
            data1.name = "name1";
            Data data2;
            data2.id = 2;
            data2.name = "name2";

            DataTable table;
            table.push_back(data1);
            DataTable::iterator it1 = table.row_index.iterator_to(table[0]);
            table.erase(it1);
            table.push_back(data1);

            Data new_data(table[0]);
            new_data.name = "new_name";
            table.replace(table.row_index.iterator_to(table[0]), new_data);

            for (unsigned i = 0; i < table.size(); ++i)
                std::cout << table[i].name << std::endl;

        #if 0
            // usage scenarios:
            DataTableView table_view(table);
            table_view.fill_from_source();  // synchronization with source
            table_view.remove(data_item1);  // remove item from view
            table_view.add(data_item2);     // add item from source table
            table_view.filter(filterfunc);  // filtering
            table_view.sort(sortfunc);      // sorting
            // modifying from the source table; how to signal the table_view?
            // FYI: the table view is attached to a GUI item
            table.erase(data);
            table.replace(data);
        #endif

            return 0;
        }
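
    One possible direction, sketched below under the assumption that C++11 is available: Boost.MultiIndex iterators remain valid while other elements are inserted or erased, so a view can identify its items by storing iterators into the source table's random-access index, rebuild itself through a filter and a sort, and signal the GUI layer through a callback. The names here (refresh, set_filter, on_change) are illustrative only, not an established API:

        // Sketch: the view holds iterators into the source table; iterators
        // identify items stably, and a callback notifies the attached GUI.
        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <vector>

        class FilteredTableView
        {
        public:
            typedef DataTable::TRowIndex::const_iterator source_iterator;
            typedef std::function<bool(const Data&)> Filter;
            typedef std::function<bool(const Data&, const Data&)> Less;

            explicit FilteredTableView(const DataTable& source) : source_(source) {}

            // Rebuild the view: keep rows passing the filter, then sort them.
            // The source table should trigger this after every mutation.
            void refresh()
            {
                rows_.clear();
                for (source_iterator it = source_.row_index.begin();
                     it != source_.row_index.end(); ++it)
                    if (!filter_ || filter_(*it))
                        rows_.push_back(it);
                if (less_)
                    std::sort(rows_.begin(), rows_.end(),
                              [this](source_iterator a, source_iterator b)
                              { return less_(*a, *b); });
                if (on_change_)
                    on_change_(); // tell the GUI layer to repaint
            }

            void set_filter(Filter f) { filter_ = f; refresh(); }
            void set_sort(Less l) { less_ = l; refresh(); }
            void set_on_change(std::function<void()> cb) { on_change_ = cb; }

            const Data& operator[](std::size_t n) const { return *rows_[n]; }
            std::size_t size() const { return rows_.size(); }

        private:
            const DataTable& source_;
            std::vector<source_iterator> rows_; // the view's row identities
            Filter filter_;
            Less less_;
            std::function<void()> on_change_;
        };

    Erasing a row from the source invalidates that row's iterator, so in this sketch the table (or an observer of it) must call refresh() after each change; that call is the signalling hook the question asks about.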


  • State of the (Commerce) Union: What the healthcare.gov hiccups teach us about the commerce customer experience

    - by Katrina Gosek
    Guest Post by Brenna Johnson, Oracle Commerce Product

    A lot has been said about the healthcare.gov debacle in the last week. Regardless of your feelings about the Affordable Care Act, there’s a hidden issue in this story that most of the American people don’t understand: delivering a great commerce customer experience (CX) is hard. It shouldn’t be, but it is. The reality of the government’s issues getting the healthcare site up and running smoothly is something we in the online commerce community know too well. If there’s one thing the botched launch of the site has taught us, it’s that regardless of the size of your budget or the power of an executive with a high-profile project, some of the biggest initiatives with the most attention (and the most at stake) don’t go as planned. It may even give you a moment of solace – we have the same issues! But why?

    Organizations engage too many separate vendors with different technologies, running sections or pieces of a site to get live. When things go wrong, it takes time to identify the problem – and who or what is at the center of it. Unfortunately, this is a brittle way of setting up a site, making it susceptible to breaks, bugs, and scaling issues. But it’s the reality of running a site with legacy technology constraints in today’s demanding, customer-centric market. This approach also means there are a lot of cooks in a lot of different kitchens. You’ve got development and IT, the business and the marketing team, an external Systems Integrator to bring it all together, a digital agency or consultant, QA, product experts, 3rd party suppliers, and the list goes on. To complicate things, different business units are held responsible for different pieces of the site and for managing different technologies. And again – due to legacy organizational structure and processes, this is all accepted as the normal State of the Union.

    Digital commerce has been commonplace for 15 years. Yet getting a site live, maintained, and performing requires orchestrating a cast of thousands (or at least dozens), big dollars, and some finger-crossing. But it shouldn’t. The great thing about the advent of mobile commerce and the continued maturity of online commerce is that it has forced organizations to think from the outside, in. Consumers – whether they’re shopping for shoes or a new healthcare plan – don’t care about what technology issues or processes you have behind the scenes. They just want it to work. They want their experience to be easy, fast, and tailored to them and their needs – whatever they are. This doesn’t sound like a tall order to the American consumer – especially since they interact with sites that do work smoothly. But the reality is that it takes scores of people, teams, check-ins, late nights, testing, and some good luck to get sites to run, and even more so at Black Friday (or October 1st) traffic levels. The last thing on a customer’s mind is making excuses for why they can’t buy a product – just get it to work.

    So what is the government doing? My guess is working day and night to get the site performing, and having to throw big money at the problem. In the meantime they’re sending frustrated online users to the call center, or even to a location where a trained “navigator” can help them in person to complete their selection. Sounds a lot like multichannel commerce (where broken communication between siloed touchpoints will only frustrate the consumer more).
    One thing we’ve learned is that consumers spend their time and money with brands they know and trust. When sites are easy to use and adapt to their needs, they tend to spend more, come back, and even become long-time loyalists. Achieving this may require moving internal mountains, but there’s too much at stake to ignore the sea change in how organizations are thinking about their customer. If the thought of re-thinking your internal teams, technologies, and processes sounds like a headache, think about the pain associated with losing valuable customers – and dollars. Regardless of whether you’re in B2B or B2C, it’s guaranteed that your competitors are making CX a priority, and those early to the game have already begun to outpace their competition.

    So as you’re planning for 2014, look to the news this week. Make sure the customer experience is a focus at your organization. Expectations are at record highs. Map your customer’s journey, and think from the outside, in. How easy is it for your customers to do business with you? If they interact with many touchpoints across your organization, are the call center, website, mobile environment, and brick-and-mortar location in sync? Do you have the technology in place to achieve this? It’s time to give the people what they want!


  • Stumbling Through: Visual Studio 2010 (Part II)

    I would now like to expand a little on what I stumbled through in part I of my Visual Studio 2010 post and touch on a few other features of VS 2010. Specifically, I want to generate some code based off of an Entity Framework model and tie it up to an actual data source. I'm not going to take the easy way and tie to a SQL Server data source, though; I will tie it to an XML data file instead. Why? Well, why not? This is purely for learning; there are probably much better ways to get strongly-typed classes around XML, but it will force us to go down a path less travelled and maybe learn a few things along the way. Once we get this XML data and the means to interact with it, I will revisit data binding to this data in a WPF form and see if I can't get reading, adding, deleting, and updating working smoothly with minimal code.

    To begin, I will use what was learned in the first part of this blog topic and draw out a data model for the MFL (My Football League) - I don't want the NFL to come down and sue me for using their name in this totally football-related article. The data model has Teams having Players, and Players having a position and statistics for each season they played.

    Note that when making the associations between these entities, I was given the option to create the foreign key, but I only chose to select this option for the association between Player and Position. The reason for this is that I am picturing the XML that will contain this data to look somewhat like this:

        <MFL>
          <Position/>
          <Position/>
          <Position/>
          <Team>
            <Player>
              <Statistic/>
            </Player>
          </Team>
        </MFL>

    Statistic will be under its associated Player node, and Player will be under its associated Team node; there is no need to have an Id to reference it if we know it will always fall under its parent. Position, however, is more of a lookup value that will not have any hierarchical relationship to the player. In fact, the Position data itself may be in a completely different XML file (something I'd like to play around with), so in any case, a player will need to reference the position by its Id.

    So now that we have a simple data model laid out, I would like to generate two things based on it:

    - A class for each entity, with properties corresponding to each entity property
    - An IO class with methods to get data for each entity, either all instances, by Id, or by parent

    Now, my experience with code generation in the past has consisted of writing up little apps that use the CodeDOM directly to regenerate code on demand (or using tools like CodeSmith). Surely there has got to be a more fun way to do this, given that we are using the Entity Framework, which already has built-in code generation for SQL Server support. Let's start with that built-in stuff to give us a base to work off of. Right-click anywhere in the canvas of our model and select "Add Code Generation Item".

    Just adding that code item seemed to do quite a bit towards what I was intending: it apparently generated a class for each entity, but also a whole ton more. I mean a TON more. Way too much complicated code was generated. Now, that code is likely to be a black box anyway, so it shouldn't matter, but we need to understand how to make this work the way we want it to work, so let's get ready to do some stumbling through that text template (tt) file.

    When I open the .tt file that was generated, right off the bat I realize there is going to be trouble: there is no color coding, no IntelliSense, no nothing!
    That is going to make stumbling through more like groping blindly in the dark while handcuffed and hopping on one foot, which was one of the alternate titles I was considering for this blog. Thankfully, the community comes to my rescue, and I won't have to cast my mind back to the glory days of coding in vi (look it up, kids). Using the Extension Manager (available under the Tools menu), I did a quick search for "tt editor" in the Online Gallery and quickly found the Tangible T4 Editor.

    Downloading and installing this was a breeze, and after doing so I got some color coding and IntelliSense while editing the .tt files. If you will be doing any customizing of .tt files, I highly recommend installing this extension. Next, we'll see if that is enough help for us to tweak that .tt file to do the kind of code generation that we want.


  • Scripting Windows Shares - VBS

    - by Calvin Piche
    So I am totally new to VBS; I have never used it. I am trying to create multiple shares, and I found a Microsoft VBS script that can do this (http://gallery.technet.microsoft.com/scriptcenter/6309d93b-fcc3-4586-b102-a71415244712). My question is, this script only allows one domain group or user to be added for permissions, while I need to add a couple with different permissions (I've got that figured out). Below is the script that I have modified for my needs; I just need to add in the second group with the other permissions. If there is an easier way to do this, please let me know.

        'ShareSetup.vbs
        '==========================================================================
        Option Explicit

        Const FILE_SHARE = 0
        Const MAXIMUM_CONNECTIONS = 25

        Dim strComputer
        Dim objWMIService
        Dim objNewShare

        strComputer = "."
        Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
        Set objNewShare = objWMIService.Get("Win32_Share")

        Call sharesec ("C:\Published Apps\Logs01", "Logs01", "Log01", "Support")
        Call sharesec2 ("C:\Published Apps\Logs01", "Logs01", "Log01", "Domain Admins")

        Sub sharesec(Fname,shr,info,account)
            'Fname = folder path, shr = share name, info = share description,
            'account = account or group you are assigning share permissions to
            Dim FSO
            Dim Services
            Dim SecDescClass
            Dim SecDesc
            Dim Trustee
            Dim ACE
            Dim Share
            Dim InParam
            Dim Network
            Dim FolderName
            Dim AdminServer
            Dim ShareName

            FolderName = Fname
            AdminServer = "\\" & strComputer
            ShareName = shr

            Set Services = GetObject("WINMGMTS:{impersonationLevel=impersonate,(Security)}!" & AdminServer & "\ROOT\CIMV2")
            Set SecDescClass = Services.Get("Win32_SecurityDescriptor")
            Set SecDesc = SecDescClass.SpawnInstance_()

            'Set Trustee = Services.Get("Win32_Trustee").SpawnInstance_
            'Trustee.Domain = Null
            'Trustee.Name = "EVERYONE"
            'Trustee.Properties_.Item("SID") = Array(1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0)

            Set Trustee = SetGroupTrustee("domain", account) 'Replace with your domain name.
            'To assign permissions to individual accounts use SetAccountTrustee rather than SetGroupTrustee

            Set ACE = Services.Get("Win32_Ace").SpawnInstance_
            ACE.Properties_.Item("AccessMask") = 1179817
            ACE.Properties_.Item("AceFlags") = 3
            ACE.Properties_.Item("AceType") = 0
            ACE.Properties_.Item("Trustee") = Trustee

            SecDesc.Properties_.Item("DACL") = Array(ACE)

            Set Share = Services.Get("Win32_Share")
            Set InParam = Share.Methods_("Create").InParameters.SpawnInstance_()
            InParam.Properties_.Item("Access") = SecDesc
            InParam.Properties_.Item("Description") = "Public Share"
            InParam.Properties_.Item("Name") = ShareName
            InParam.Properties_.Item("Path") = FolderName
            InParam.Properties_.Item("Type") = 0
            Share.ExecMethod_ "Create", InParam
        End Sub

        Sub sharesec2(Fname,shr,info,account)
            'Fname = folder path, shr = share name, info = share description,
            'account = account or group you are assigning share permissions to
            Dim FSO
            Dim Services
            Dim SecDescClass
            Dim SecDesc
            Dim Trustee
            Dim ACE2
            Dim Share
            Dim InParam
            Dim Network
            Dim FolderName
            Dim AdminServer
            Dim ShareName

            FolderName = Fname
            AdminServer = "\\" & strComputer
            ShareName = shr

            Set Services = GetObject("WINMGMTS:{impersonationLevel=impersonate,(Security)}!" & AdminServer & "\ROOT\CIMV2")
            Set SecDescClass = Services.Get("Win32_SecurityDescriptor")
            Set SecDesc = SecDescClass.SpawnInstance_()

            'Set Trustee = Services.Get("Win32_Trustee").SpawnInstance_
            'Trustee.Domain = Null
            'Trustee.Name = "EVERYONE"
            'Trustee.Properties_.Item("SID") = Array(1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0)

            Set Trustee = SetGroupTrustee("domain", account) 'Replace with your domain name.
            'To assign permissions to individual accounts use SetAccountTrustee rather than SetGroupTrustee

            Set ACE2 = Services.Get("Win32_Ace").SpawnInstance_
            ACE2.Properties_.Item("AccessMask") = 1179817
            ACE2.Properties_.Item("AceFlags") = 3
            ACE2.Properties_.Item("AceType") = 0
            ACE2.Properties_.Item("Trustee") = Trustee

            SecDesc.Properties_.Item("DACL") = Array(ACE2)
        End Sub

        Function SetAccountTrustee(strDomain, strName)
            Dim objTrustee
            Dim account
            Dim accountSID
            Set objTrustee = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").SpawnInstance_
            Set account = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Account.Name='" & strName & "',Domain='" & strDomain & "'")
            Set accountSID = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID & "'")
            objTrustee.Domain = strDomain
            objTrustee.Name = strName
            objTrustee.Properties_.Item("SID") = accountSID.BinaryRepresentation
            Set accountSID = Nothing
            Set account = Nothing
            Set SetAccountTrustee = objTrustee
        End Function

        Function SetGroupTrustee(strDomain, strName)
            Dim objTrustee
            Dim account
            Dim accountSID
            Set objTrustee = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Trustee").SpawnInstance_
            Set account = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_Group.Name='" & strName & "',Domain='" & strDomain & "'")
            Set accountSID = GetObject("Winmgmts:{impersonationlevel=impersonate}!root/cimv2:Win32_SID.SID='" & account.SID & "'")
            objTrustee.Domain = strDomain
            objTrustee.Name = strName
            objTrustee.Properties_.Item("SID") = accountSID.BinaryRepresentation
            Set accountSID = Nothing
            Set account = Nothing
            Set SetGroupTrustee = objTrustee
        End Function


  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs at the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

        typedef int EntityID;
        typedef int ModelID;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

    This can make the intent of the code clearer, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps:

        EntityID eID;
        ModelID mID;
        if ( eID == mID )      // <- compiler sees nothing wrong
        { /* bug */ }

        Position p;
        Velocity v;
        Position newP = p + v; // bug, meant p + v*s, but the compiler sees nothing wrong

    Unfortunately, the suggestions I've found for strongly typed typedefs involve using boost, which at least for me isn't a possibility (I do have C++11, at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone. First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

        template < typename T >
        class IDType
        {
            unsigned int m_id;
        public:
            IDType( unsigned int const& i_id ): m_id {i_id} {};
            friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
        };

    Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

        class EntityT;
        typedef IDType<EntityT> EntityID;
        class ModelT;
        typedef IDType<ModelT> ModelID;

    The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping anyone had comments or critiques about this idea?

    One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, I could do:

        typedef float Time;
        typedef Vector3 Position;
        typedef Vector3 Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t;

    With my strongly typed typedef, I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position:

        class TimeT;
        typedef Float<TimeT> Time;
        class PositionT;
        typedef Vector3<PositionT> Position;
        class VelocityT;
        typedef Vector3<VelocityT> Velocity;

        Time t = 1.0f;
        Position p = { 0.0f };
        Velocity v = { 1.0f, 0.0f, 0.0f };
        Position newP = p + v*t; // compiler error

    To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.
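
    For the conversion issue at the end, here is a minimal sketch of what "telling the compiler" looks like, assuming the Float<Tag> and Vector3<Tag> wrappers from above expose their raw values through accessors; the value()/x()/y()/z() names are invented for illustration:

        // Each physically meaningful cross-type operation is declared exactly
        // once. Anything not declared, e.g. Velocity * Distance, stays a
        // compile-time error, which is the point of the idiom.
        inline Position operator*(Velocity const& v, Time const& t)
        {
            return Position{ v.x() * t.value(),
                             v.y() * t.value(),
                             v.z() * t.value() };
        }

    With that single overload (and Position + Position defined on the Vector3 wrapper), the intended expression Position newP = p + v*t compiles again, while the mistakes the idiom is meant to catch still do not.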


  • Combine config-parameters with parameters passed from the command line

    - by Frederik
    I have created an SSIS package that imports a file into a table (simple enough). I have some variables; a few are set in a config file, such as server, database, and import folder. At runtime I want to pass the file name. This is done through a stored procedure using dtexec. When setting the parameters through the config file it works fine, and it also works when setting all parameters in the procedure and passing them with the /SET option (see below). When I try to combine the config version with setting parameters on the fly, I get an error referring to the config file's path that was set at design time. Has anybody come across this and found a solution for it? Regards, Frederik

        DECLARE @SSISSTR VARCHAR(8000),
            @DataBaseServer VARCHAR(100),
            @DataBaseName VARCHAR(100),
            @PackageFilePath VARCHAR(200),
            @ImportFolder VARCHAR(200),
            @HandledFolder VARCHAR(200),
            @ConfigFilePath VARCHAR(200),
            @SSISreturncode INT;

        /* DEBUGGING
        DECLARE @FileName VARCHAR(100), @SelectionId INT
        SET @FileName = 'Test.csv';
        SET @SelectionId = 366;
        */

        SET @PackageFilePath = '/FILE "Y:\SSIS\Packages\PostalCodeSelectionImport\ImportPackage.dtsx" ';
        SET @DataBaseServer = 'STOSWVUTVDB01\DEV_BSE';
        SET @DataBaseName = 'BSE_ODR';
        SET @ImportFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\\';
        SET @HandledFolder = '\\Stoswvutvbse01\Application\FileLoadArea\ODR\Handled\\';
        --SET @ConfigFilePath = '/CONFIGFILE "Y:\SSIS\Packages\PostalCodeSelectionImport\Configuration\DEV_BSE.dtsConfig" ';

        -- now build the "dtexec" command line from dynamic values
        SET @SSISSTR = 'DTEXEC ' + @PackageFilePath; -- + @ConfigFilePath;
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::SelectionId].Properties[Value];' + CAST(@SelectionId AS VARCHAR(12));
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseServer].Properties[Value];"' + @DataBaseServer + '"';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFolder].Properties[Value];"' + @ImportFolder + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::DataBaseName].Properties[Value];"' + @DataBaseName + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::ImportFileName].Properties[Value];"' + @FileName + '" ';
        SET @SSISSTR = @SSISSTR + ' /SET \Package.Variables[User::HandledFolder].Properties[Value];"' + @HandledFolder + '" ';

        -- Now execute the dynamic command line via xp_cmdshell.
        EXEC @SSISreturncode = xp_cmdshell @SSISSTR;


  • "No message serializer has been configured" error when starting NServiceBus endpoint

    - by SteveBering
    My GenericHost-hosted service is failing to start with the following message:

        2010-05-07 09:13:47,406 [1] FATAL NServiceBus.Host.Internal.GenericHost [(null)] <(null)> -
        System.InvalidOperationException: No message serializer has been configured.
           at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.CheckConfiguration() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\MsmqTransport.cs:line 241
           at NServiceBus.Unicast.Transport.Msmq.MsmqTransport.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\impl\unicast\NServiceBus.Unicast.Msmq\MsmqTransport.cs:line 211
           at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start(Action startupAction) in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Unicast\UnicastBus.cs:line 694
           at NServiceBus.Unicast.UnicastBus.NServiceBus.IStartableBus.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\unicast\NServiceBus.Unicast\UnicastBus.cs:line 665
           at NServiceBus.Host.Internal.GenericHost.Start() in d:\BuildAgent-02\work\672d81652eaca4e1\src\host\NServiceBus.Host\Internal\GenericHost.cs:line 77

    My endpoint configuration looks like:

        public class ServiceEndpointConfiguration
            : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization
        {
            public void Init()
            {
                // build out persistence infrastructure
                var sessionFactory = Bootstrapper.InitializePersistence();

                // configure NServiceBus infrastructure
                var container = Bootstrapper.BuildDependencies(sessionFactory);

                // set up logging
                log4net.Config.XmlConfigurator.Configure();

                Configure.With()
                    .Log4Net()
                    .UnityBuilder(container)
                    .XmlSerializer();
            }
        }

    And my app.config looks like:

        <configSections>
          <section name="MsmqTransportConfig" type="NServiceBus.Config.MsmqTransportConfig, NServiceBus.Core" />
          <section name="UnicastBusConfig" type="NServiceBus.Config.UnicastBusConfig, NServiceBus.Core" />
          <section name="Logging" type="NServiceBus.Config.Logging, NServiceBus.Core" />
          <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" requirePermission="false" />
        </configSections>

        <Logging Threshold="DEBUG" />

        <MsmqTransportConfig InputQueue="NServiceBus.ServiceInput"
                             ErrorQueue="NServiceBus.Errors"
                             NumberOfWorkerThreads="1"
                             MaxRetries="2" />

        <UnicastBusConfig DistributorControlAddress=""
                          DistributorDataAddress=""
                          ForwardReceivedMessagesTo="NServiceBus.Auditing">
          <MessageEndpointMappings>
            <!-- publishers don't need to set this for their own message types -->
          </MessageEndpointMappings>
        </UnicastBusConfig>

        <connectionStrings>
          <add name="Db" connectionString="Data Source=..." providerName="System.Data.SqlClient" />
        </connectionStrings>

        <log4net debug="true">
          <root>
            <level value="INFO"/>
          </root>
          <logger name="NHibernate">
            <level value="ERROR" />
          </logger>
        </log4net>

    This has worked in the past, but seems to be failing when the generic host starts. What is strange is that in my endpoint configuration I am specifying the XmlSerializer for message serialization. I don't see any other errors in the console output preceding the error message. What am I missing? Thanks, Steve


  • Nhibernate 2.1 and mysql 5 - InvalidCastException on Setup

    - by Nash
    Hello there, I am trying to use NHibernate with Spring.NET and MySQL 5. However, when setting up the connection and creating the SessionFactory object, I get this InvalidCastException: NHibernate seems to cast MySql.Data.MySqlClient.MySqlConnection to System.Data.Common.DbConnection, which causes the exception (the original message is German; translated below):

        System.InvalidCastException was unhandled.
          Message="The object of type 'MySql.Data.MySqlClient.MySqlConnection' cannot be cast to type 'System.Data.Common.DbConnection'."
          Source="NHibernate"
          StackTrace:
            at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Prepare() in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SuppliedConnectionProviderConnectionHelper.cs:line 25
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 43
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory) in c:\CSharp\NH\nhibernate\src\NHibernate\Tool\hbm2ddl\SchemaMetadataUpdater.cs:line 17
            at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) in c:\CSharp\NH\nhibernate\src\NHibernate\Impl\SessionFactoryImpl.cs:line 169
            at NHibernate.Cfg.Configuration.BuildSessionFactory() in c:\CSharp\NH\nhibernate\src\NHibernate\Cfg\Configuration.cs:line 1090
            at OrmTest.Program.Main(String[] args) in C:\Users\Max\Documents\Visual Studio 2008\Projects\OrmTest\OrmTest\Program.cs:line 24
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException:

    I am using the programmatic setup approach in order to get a quick NHibernate setup. Here is the setup code:

        Configuration config = new Configuration();
        Dictionary<string, string> props = new Dictionary<string, string>();
        props.Add("dialect", "NHibernate.Dialect.MySQL5Dialect");
        props.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
        props.Add("connection.driver_class", "NHibernate.Driver.MySqlDataDriver");
        props.Add("connection.connection_string", "Server=localhost;Database=orm_test;User ID=root;Password=password");
        props.Add("proxyfactory.factory_class", "NHibernate.ByteCode.Spring.ProxyFactoryFactory, NHibernate.ByteCode.Spring");
        config.AddProperties(props);
        config.AddFile("Person.hbm.xml");
        ISessionFactory factory = config.BuildSessionFactory();
        ISession session = factory.OpenSession();

    Is something missing? I downloaded the current MySQL Connector from the MySQL website.


  • problem with NHibernate and iSeries DB2

    - by chrisjlong
    OK, so I have an AS400/iSeries running V5R4. I have an application that was using classic NHibernate to connect and do some basic CRUD. Now I have pulled that app (which sat for two years) off the shelf of TFS and onto a new PC, and I cannot seem to get it running. Here is my hibernate config:

        <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
          <session-factory>
            <property name="connection.provider">
              NHibernate.Connection.DriverConnectionProvider
            </property>
            <property name="dialect">
              NHibernate.Dialect.DB2400Dialect
            </property>
            <property name="proxyfactory.factory_class">NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu</property>
            <property name="connection.connection_string">
              DataSource=207.206.106.19;
              Database=AS400;
              userID=XXXXXX;
              Password=XXXXXXX;
              LibraryList=FMSFILTST,BEFFILT,HRDBFT,HRCSTFT,J20##X2DEV,GLCUSTDEV,OSL@@F3DEV;
              Naming=System;
              Initial Catalog=*SYSBAS;
            </property>
            <property name="use_outer_join">true</property>
            <property name="query.substitutions">
              true 1, false 0, yes 'Y', no 'N'
            </property>
            <property name="show_sql">false</property>
            <mapping assembly="BusinessLogic" />
          </session-factory>
        </hibernate-configuration>

    I have all the proper DLLs included (NHibernate, Castle, Iesi, Antlr3, log4net, etc.). I also have this line in my web.config:

        <runtime>
          <assemblyBinding>
            <qualifyAssembly partialName="IBM.Data.DB2.iSeries" fullName="IBM.Data.DB2.iSeries,Version=10.0.0.0,PublicKeyToken=9CDB2EBFB1F93A26,Culture=neutral"/>
          </assemblyBinding>
        </runtime>

    Yet I am still getting the following error as soon as I call

        NHibernate.Cfg.Configuration().Configure().BuildSessionFactory().OpenSession();

    The error is as follows:

        Unable to cast object of type 'IBM.Data.DB2.iSeries.iDB2Connection' to type 'System.Data.Common.DbCommand'

    I am dying to get some help with this. Any assistance is appreciated. Thanks!


  • InteropServices COMException when executing a .net app from a web CGI script on Windows Server 2003

    - by Kurt W. Leucht
    Disclaimer: I'm completely clueless about .NET and COM.

    I have a vendor's application that appears to be written in .NET, and I'm trying to wrap it with a web form (a cgi-bin Perl script) so I can eventually launch this vendor's app from a separate computer. I'm on a Windows Server 2003 R2 SE SP1 system, and I'm using Apache 2.2 for the web server and ActivePerl 5.10.0.1004 for the CGI script. My CGI script calls the vendor's app, which resides on the same machine, using the Perl backtick operator:

        ...
        $result = "Result: " . `$vendorsPath/$vendorsExecutable $arg1 $arg2`;
        ...

    Right now I'm just running the IE web browser locally on the server machine and accessing "http://localhost/cgi-bin/myPerlScript.pl". The vendor's app fails and logs a debug message that includes the following stack trace (I changed a couple of names so as to not give away the vendor's identity):

        ...
        System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Runtime.InteropServices.COMException (0x80043A1D): 0x80040154 - Class not registered
           --- End of inner exception stack trace ---
           at System.RuntimeType.InvokeDispMethod(String name, BindingFlags invokeAttr, Object target, Object[] args, Boolean[] byrefModifiers, Int32 culture, String[] namedParameters)
           at System.RuntimeType.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParameters)
           at VendorsTool.Engine.Core.VendorsEngine.LoadVendorsServices(String fileName, String& projectCommPath)
        ...

    When I run the vendor's app from the Windows command line on the server machine with the exact same arguments that the CGI script is passing, it runs just fine, so there's something about invoking their app via the web script that is causing a problem. This problem is likely security related, because the whole thing runs just fine on a Windows XP Pro machine (both command line and web invocation). I actually developed my web script there and got it completely working before I tried moving it to the Windows Server 2003 machine.

    So what's different about the Windows Server 2003 machine that would keep the vendor's .NET app from being executed successfully by a web CGI script? Can I fix this problem somehow to make it work on my server, or will the vendor have to make a change to their .NET app and ship out a new version? I'm probably the only person in the world trying to execute this vendor's app from a separate program, so I hate to bother the vendor with the issue if there's a workaround I can implement myself here on my server machine. Plus, I'm in kind of a hurry, and I don't want to wait 4 or 6 months for the vendor to put in a fix and deploy a new version. Thanks for any advice you can give.


  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of self-tracking entities that are returned from service calls. The idea is that we want to be able to get entities, add, update, and remove them from the collection client-side, and then send these changes to the server side, where they will be persisted to the database. Self-tracking entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state; it can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior, you need to code it yourself.

    In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along and Entity Framework knows to delete them. Something along the lines of:

        protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>();

        protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection)
        {
            var deletedEntities = new List<TEntity>();
            DeletedCollections[collection.GetHashCode()] = deletedEntities;

            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    deletedEntities.AddRange(a.OldItems.Cast<TEntity>());
                }
            };
        }

    Now if the user decides to save his changes to the server, I can get the list of removed items and send them along:

        ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers();
        customers.RemoveAt(0);
        MyServiceProxy.UpdateCustomers(customers);

    At this point the UpdateCustomers method will check my shadow collection for any removed items and send them along to the server side. This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected, there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection and every few seconds I check to see if the reference is inactive, in which case I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution. Thanks!
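
    The same bookkeeping can be expressed without a polling timer by keying the shadow store on a weak reference and purging dead entries lazily on each access. A sketch of that idea, translated into C++ terms with std::weak_ptr standing in for WeakReference (all names illustrative):

        // Shadow store keyed by weak references to the observed collections.
        // std::owner_less lets weak_ptrs act as map keys; expired entries are
        // purged opportunistically instead of by a background timer.
        #include <iterator>
        #include <map>
        #include <memory>
        #include <utility>
        #include <vector>

        template <typename Collection, typename Entity>
        class DeletionTracker
        {
        public:
            void track(const std::shared_ptr<Collection>& c, const Entity& removed)
            {
                purge_expired();
                deleted_[c].push_back(removed);
            }

            // Hand back (and forget) the deletions recorded for one collection.
            std::vector<Entity> take_deleted(const std::shared_ptr<Collection>& c)
            {
                purge_expired();
                auto it = deleted_.find(c);
                if (it == deleted_.end()) return {};
                std::vector<Entity> out = std::move(it->second);
                deleted_.erase(it);
                return out;
            }

        private:
            void purge_expired()
            {
                for (auto it = deleted_.begin(); it != deleted_.end(); )
                    it = it->first.expired() ? deleted_.erase(it) : std::next(it);
            }

            std::map<std::weak_ptr<Collection>, std::vector<Entity>,
                     std::owner_less<std::weak_ptr<Collection>>> deleted_;
        };

    The .NET shape of this would be a table of WeakReference keys swept on access rather than on a timer; ConditionalWeakTable (framework 4 and later) automates exactly that kind of key-lifetime-bound storage, though that is a suggestion, not something from the original post.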


  • Detecting 'stealth' web-crawlers

    - by Jacco
    What options are there to detect web crawlers that do not want to be detected? (I know that listing detection techniques will allow the smart stealth-crawler programmer to make a better spider, but I do not think that we will ever be able to block smart stealth crawlers anyway, only the ones that make mistakes.)

    I'm not talking about the nice crawlers such as Googlebot and Yahoo! Slurp. I consider a bot nice if it:

    - identifies itself as a bot in the user agent string
    - reads robots.txt (and obeys it)

    I'm talking about the bad crawlers, hiding behind common user agents, using my bandwidth and never giving me anything in return. There are some trapdoors that can be constructed (updated list, thanks Chris, gs):

    - adding a directory only listed (marked as disallow) in robots.txt
    - adding invisible links (possibly marked as rel="nofollow"?), either with style="display: none;" on the link or its parent container, or placed underneath another element with a higher z-index
    - detecting who doesn't understand CaPiTaLiSaTioN
    - detecting who tries to post replies but always fails the captcha
    - detecting GET requests to POST-only resources
    - detecting the interval between requests (a sketch of this one appears at the end of this post)
    - detecting the order of pages requested
    - detecting who (consistently) requests https resources over http
    - detecting who does not request the image file (this, in combination with a list of user agents of known image-capable browsers, works surprisingly well)

    Some traps would be triggered by both 'good' and 'bad' bots. You could combine those with a whitelist:

    - it triggered a trap
    - did it request robots.txt?
    - it did not trigger another trap, because it obeyed robots.txt

    One other important thing here: please consider blind people using screen readers; give people a way to contact you, or a (non-image) captcha to solve so they can continue browsing.

    So: what methods are there to automatically detect the web crawlers trying to mask themselves as normal human visitors?

    Update: The question is not "How do I catch every crawler?" The question is "How can I maximize the chance of detecting a crawler?" Some spiders are really good, and actually parse and understand HTML, XHTML, CSS, JavaScript, VBScript, etc... I have no illusions: I won't be able to beat them. You would however be surprised how stupid some crawlers are, with the best example of stupidity (in my opinion) being: casting all URLs to lower case before requesting them. And then there is a whole bunch of crawlers that are just 'not good enough' to avoid the various trapdoors.
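
    For the interval-between-requests signal, the core bookkeeping is small. A sketch of the idea (the threshold, window size, and keying by IP are illustrative assumptions, not a recipe):

        // Flags clients whose inter-request gaps are mostly below a
        // human-plausible threshold over a sliding window of requests.
        #include <chrono>
        #include <cstddef>
        #include <deque>
        #include <string>
        #include <unordered_map>

        class IntervalDetector
        {
            using clock = std::chrono::steady_clock;
        public:
            explicit IntervalDetector(std::chrono::milliseconds min_gap = std::chrono::milliseconds(500),
                                      std::size_t window = 20)
                : min_gap_(min_gap), window_(window) {}

            // Record one request; returns true if the client looks automated.
            bool record(const std::string& client_ip)
            {
                std::deque<clock::time_point>& times = history_[client_ip];
                times.push_back(clock::now());
                if (times.size() > window_) times.pop_front();
                if (times.size() < window_) return false;

                std::size_t fast = 0; // gaps shorter than a human could produce
                for (std::size_t i = 1; i < times.size(); ++i)
                    if (times[i] - times[i - 1] < min_gap_) ++fast;
                return fast > times.size() * 3 / 4; // mostly sub-threshold gaps
            }

        private:
            std::chrono::milliseconds min_gap_;
            std::size_t window_;
            std::unordered_map<std::string, std::deque<clock::time_point>> history_;
        };

    By itself this also flags fast human clickers and shared proxies, which is why it works best combined with the whitelist step above rather than as a standalone verdict.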


  • Access Violation Using memcpy or Assignment to an Array in a Struct

    - by Synetech inc.
    Hi, I wrote a program last night that worked just fine, but when I refactored it today to make it more extensible, I ended up with a problem. The original version had a hard-coded array of bytes. After some processing, some bytes were written into the array and then some more processing was done. To avoid hard-coding the pattern, I put the array in a structure so that I could add some related data and create an array of them. However, now I cannot write to the array in the structure. Here's a pseudo-code example:

        main() {
            char pattern[] = "\x32\x33\x12\x13\xba\xbb";
            PrintData(pattern);
            pattern[2] = '\x65';
            PrintData(pattern);
        }

    That one works, but this one does not:

        struct ENTRY {
            char* pattern;
            int somenum;
        };

        main() {
            ENTRY Entries[] = {
                  {"\x32\x33\x12\x13\xba\xbb\x9a\xbc", 44}
                , {"\x12\x34\x56\x78", 555}
            };
            PrintData(Entries[0].pattern);
            Entries[0].pattern[2] = '\x65';   // 0xC0000005 exception!!! :(
            PrintData(Entries[0].pattern);
        }

    The second version causes an access violation exception on the assignment. I'm sure it's because the second version allocates memory differently, but I'm starting to get a headache trying to figure out what's what or how to fix this. (I'm currently working around it by dynamically allocating a buffer of the same size as the pattern array, copying the pattern to the new buffer, making the changes to the buffer, using the buffer in place of the pattern array, and then trying to remember to free the temporary buffer.)

    (Specifically, the original version cast the pattern array, plus an offset, to a DWORD* and assigned a DWORD constant to it to overwrite the four target bytes. The new version cannot do that, since the length of the source is unknown and may not be four bytes, so it uses memcpy instead. I've checked and re-checked and have made sure that the pointers to memcpy are correct, but I still get an access violation. I use memcpy instead of str(n)cpy because I am using plain chars (as an array of bytes), not Unicode chars, and ignoring the null terminator. Using an assignment as above causes the same problem.)

    Any ideas? Thanks a lot.
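
    For what it's worth, the crash pattern is consistent with where the bytes live rather than with the struct itself: a char* member initialized from a string literal points at the literal, which may sit in read-only memory, while a char array member receives a writable copy. A minimal sketch of the difference (illustrative, not the original code):

        #include <cstdio>

        struct EntryPtr { const char* pattern; int somenum; }; // pattern points at the literal
        struct EntryArr { char pattern[16];    int somenum; }; // pattern is a writable copy

        int main()
        {
            EntryPtr p = { "\x32\x33\x12\x13", 44 };
            (void)p; // writing through p.pattern would write into the literal
                     // itself, which commonly lives in read-only memory

            EntryArr a = { "\x32\x33\x12\x13", 44 };
            a.pattern[2] = '\x65'; // fine: the array inside the struct is a copy
            std::printf("%02x\n",
                        static_cast<unsigned>(static_cast<unsigned char>(a.pattern[2])));
            return 0;
        }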


  • Extracting pair member in lambda expressions and template typedef idiom

    - by Nazgob
    Hi, I have some complex types here, so I decided to use a nifty trick to have typedefs of templated types. Then I have a class some_container that has a container as a member. The container is a vector of pairs composed of an element and a vector. I want to write the std::find_if algorithm with a lambda expression to find elements that have a certain value. To get the value, I have to call first on the pair and then get the value from the element. Below my std::find_if there is a normal loop that does the trick. My lambda fails to compile. How do I access the value inside the element, which is inside the pair? I use g++ 4.4+ and VS 2010, and I want to stick to boost lambda for now.

        #include <vector>
        #include <algorithm>
        #include <boost/lambda/lambda.hpp>
        #include <boost/lambda/bind.hpp>

        template<typename T>
        class element
        {
        public:
            T value;
        };

        template<typename T>
        class element_vector_pair // idiom to have templated typedef
        {
        public:
            typedef std::pair<element<T>, std::vector<T> > type;
        };

        template<typename T>
        class vector_containter // idiom to have templated typedef
        {
        public:
            typedef std::vector<typename element_vector_pair<T>::type > type;
        };

        template<typename T>
        bool operator==(const typename element_vector_pair<T>::type& lhs,
                        const typename element_vector_pair<T>::type& rhs)
        {
            return lhs.first.value == rhs.first.value;
        }

        template<typename T>
        class some_container
        {
        public:
            element<T> get_element(const T& value) const
            {
                std::find_if(container.begin(), container.end(),
                    bind(&typename vector_containter<T>::type::value_type::first::value,
                         boost::lambda::_1) == value);

                /*
                for(size_t i = 0; i < container.size(); ++i)
                {
                    if(container.at(i).first.value == value)
                    {
                        return container.at(i);
                    }
                }
                */
                return element<T>(); // whatever
            }

        protected:
            typename vector_containter<T>::type container;
        };

        int main()
        {
            some_container<int> s;
            s.get_element(5);
            return 0;
        }
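
    If boost::lambda keeps fighting the nested member access, a hand-written predicate sidesteps the problem entirely and compiles as plain C++03; a sketch, reusing the types from the question:

        // Plain function object: compare the value inside the element inside
        // the pair, with no lambda-library machinery involved.
        template<typename T>
        struct value_equals
        {
            explicit value_equals(const T& v) : value(v) {}
            bool operator()(const typename element_vector_pair<T>::type& p) const
            {
                return p.first.value == value;
            }
            T value;
        };

        // Usage inside some_container<T>::get_element:
        //   typename vector_containter<T>::type::const_iterator it =
        //       std::find_if(container.begin(), container.end(),
        //                    value_equals<T>(value));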


  • gcc 4.5 installation problem under ubuntu

    - by Claire Huang
    I tried to install gcc 4.5 on Ubuntu 10.04 but failed. Here is a compile error that I don't know how to solve. Has anyone successfully installed the latest gcc on Ubuntu? Following are my steps and the error message; I'd like to know where the problem is.

    Step 1: download these files:

        gcc-core-4.5.0.tar.gz
        gcc-g++-4.5.0.tar.gz
        gmp-4.3.2.tar.bz2
        mpc-0.8.1.tar.gz
        mpfr-2.4.2.tar.gz

    Step 2: unzip the above files.

    Step 3: move gmp, mpc, and mpfr to the gcc-4.5.0/ directory:

        mv gmp-4.3.2 gcc-4.5.0/gmp
        mv mpc-0.8.1 gcc-4.5.0/mpc
        mv mpfr-2.4.2 gcc-4.5.0/mpfr

    Step 4: go to the gcc-4.5.0 directory and do the configuration:

        sudo ./configure

    Step 5: compile and install:

        sudo make
        sudo make install

    The first four steps are OK; I can configure it successfully. However, when I try to compile it, the following error message comes out, and I cannot figure out what the problem is. Should I change the directory name from "gcc 4.5" to "gcc"? It's a little strange that we would need to do this ourselves. Is there anything I missed during the installation?

        xxx@xxx-laptop:/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0$ sudo make
        [sudo] password for xxx:
        [ -f stage_final ] || echo stage3 > stage_final
        /bin/bash: line 2: test: /media/Data/Tool/linux/gcc: binary operator expected
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[1]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[3]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        rm -f stage_current
        make[3]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        Configuring stage 1 in host-x86_64-unknown-linux-gnu/intl
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[2]: *** [configure-stage1-intl] Error 127
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[1]: *** [stage1-bubble] Error 2
        make[1]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make: *** [all] Error 2


  • sqlite3-ruby can't make on rvm 1.8.7

    - by Josh Crews
    Upgrading to Rails 3 by starting with RVM 1.8.7. OS X 10.5.8. Output:

        josh-crewss-macbook:~ joshcrews$ gem install sqlite3-ruby
        Building native extensions. This could take a while...
        ERROR: Error installing sqlite3-ruby:
            ERROR: Failed to build gem native extension.

        /Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/bin/ruby extconf.rb
        checking for sqlite3.h... yes
        checking for sqlite3_libversion_number() in -lsqlite3... yes
        checking for rb_proc_arity()... no
        checking for sqlite3_column_database_name()... no
        checking for sqlite3_enable_load_extension()... no
        checking for sqlite3_load_extension()... no
        creating Makefile

        make
        gcc -I. -I. -I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0 -I. -I/usr/local/include -I/opt/local/include -I/usr/include -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -g -O2 -fno-common -pipe -fno-common -O3 -Wall -Wcast-qual -Wwrite-strings -Wconversion -Wmissing-noreturn -Winline -c database.c
        database.c: In function ‘deallocate’:
        database.c:17: warning: implicit declaration of function ‘sqlite3_next_stmt’
        database.c:17: warning: assignment makes pointer from integer without a cast
        database.c: In function ‘initialize’:
        database.c:76: warning: implicit declaration of function ‘sqlite3_open_v2’
        database.c:79: error: ‘SQLITE_OPEN_READWRITE’ undeclared (first use in this function)
        database.c:79: error: (Each undeclared identifier is reported only once
        database.c:79: error: for each function it appears in.)
        database.c:79: error: ‘SQLITE_OPEN_CREATE’ undeclared (first use in this function)
        database.c: In function ‘set_sqlite3_func_result’:
        database.c:277: error: ‘sqlite3_int64’ undeclared (first use in this function)
        database.c: In function ‘rb_sqlite3_func’:
        database.c:311: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        database.c: In function ‘rb_sqlite3_step’:
        database.c:378: warning: passing argument 1 of ‘ruby_xcalloc’ as signed due to prototype
        make: *** [database.o] Error 1

    Gem list (these are under RVM; under the system Ruby I've got a lot more gems, including the sqlite3-ruby that has worked for 1.5 years):

        josh-crewss-macbook:~ joshcrews$ gem list

        *** LOCAL GEMS ***

        abstract (1.0.0)
        actionmailer (3.0.0.beta3)
        actionpack (3.0.0.beta3)
        activemodel (3.0.0.beta3)
        activerecord (3.0.0.beta3)
        activeresource (3.0.0.beta3)
        activesupport (3.0.0.beta3, 2.3.8)
        arel (0.3.3)
        builder (2.1.2)
        bundler (0.9.25)
        capybara (0.3.8)
        configuration (1.1.0)
        cucumber (0.7.2)
        cucumber-rails (0.3.1)
        culerity (0.2.10)
        database_cleaner (0.5.2)
        diff-lcs (1.1.2)
        erubis (2.6.5)
        ffi (0.6.3)
        gherkin (1.0.30)
        i18n (0.4.0, 0.3.7)
        json_pure (1.4.3)
        launchy (0.3.5)
        mail (2.2.1)
        memcache-client (1.8.3)
        mime-types (1.16)
        nokogiri (1.4.2)
        polyglot (0.3.1)
        rack (1.1.0)
        rack-mount (0.6.3)
        rack-test (0.5.4)
        rails (3.0.0.beta3)
        railties (3.0.0.beta3)
        rake (0.8.7)
        rdoc (2.5.8)
        rspec (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-core (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-expectations (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-mocks (2.0.0.beta.10, 2.0.0.beta.8)
        rspec-rails (2.0.0.beta.10, 2.0.0.beta.8)
        rubygems-update (1.3.7)
        selenium-webdriver (0.0.20)
        spork (0.8.3)
        term-ansicolor (1.0.5)
        text-format (1.0.0)
        text-hyphen (1.0.0)
        thor (0.13.6)
        treetop (1.4.8)
        trollop (1.16.2)
        tzinfo (0.3.22)
        webrat (0.7.1)

    Version of Xcode: 3.1.1

    My suspicion is that it has to do with "-I/Users/joshcrews/.rvm/rubies/ruby-1.8.7-p174/lib/ruby/1.8/i686-darwin9.8.0", because i686-darwin9.8.0 doesn't exist in that path.


  • Linq to List and IEnumerable issues

    - by Otaku
    I am querying an HTML file with LINQ. It looks something like this:

        <html>
        <body>
          <div class="Players">
            <div class="role">Goalies</div>
            <div class="name">John Smith</div>
            <div class="name">Shawn Xie</div>
            <div class="role">Right Wings</div>
            <div class="name">Jack Davis</div>
            <div class="name">Carl Yuns</div>
            <div class="name">Wayne Gortonia</div>
            <div class="role">Centers</div>
            <div class="name">Lutz Gaspy</div>
            <div class="name">John Jacobs</div>
          </div>
        </body>
        </html>

    What I'm trying to do is create a list of these folks, as a list of a structure called Players:

        Structure Players
            Public Name As String
            Public Position As String
        End Structure

    But I've quickly found out I don't really know what I'm doing when it comes to LINQ. I've gotten this far with my queries:

        Dim goalieList = From d In player.Elements _
                         Where d.Value = "Goalies" _
                         Select From g In d.ElementsAfterSelf _
                                Take While (g.@class <> "role") _
                                Select New Players With {.Position = "Goalie", _
                                                         .Name = g.Value}

        Dim centersList = From d In player.Elements _
                          Where d.Value = "Centers" _
                          Select From g In d.ElementsAfterSelf _
                                 Take While (g.@class <> "role") _
                                 Select New Players With {.Position = "Centers", _
                                                          .Name = g.Value}

    This gets me down to the players by position, but then I can't do much with it afterwards; the result type is:

        System.Collections.Generic.IEnumerable(Of System.Collections.Generic.IEnumerable(Of Player))

    What I want to do is add these two results to a new list, like:

        Dim playersList As List(Of Players) = Nothing
        playersList.AddRange(centersList)
        playersList.AddRange(goalieList)

    so that I can then query the list and use it. But it kicks the error:

        Unable to cast object of type 'WhereSelectEnumerableIterator`2[System.Xml.Linq.XElement,System.Collections.Generic.IEnumerable`1[Players]]' to type 'System.Collections.Generic.IEnumerable`1[Players]'

    As you can see, I may really have no idea how to work with all these objects/classes. Does anyone have any insight into what I may be doing wrong and how I can resolve it?

    RESOLVED: The LINQ query needs to return a single IEnumerable, like this:

        Dim goalieList = From l In _
            (From d In players.Elements _
             Where d.Value = "Goalies" _
             Select d.ElementsAfterSelf.TakeWhile(Function(f) f.@class <> "role")) _
            Select New Players With {.Position = "Goalie", .Name = l.Value}

    and then use goalieList.ToList

    Read the article

  • gwt-RPC problem: what is the best practice for using gwt-RPC?

    - by guaz
    Dear all, I want to draw a chart based on data retrieved from the database via RPC, but every time I fail to get the result. My RPC function itself works; I think the problem is the sequence of operations. Below is my class:

    ```java
    public class TrafficPattern_1 extends GChart {
        TrafficPattern_1() {
            final DBServiceAsync dbService = GWT.create(DBService.class);
            dbService.SendData(null, null, new AsyncCallback<Container_TrafficPattern>() {
                @Override
                public void onFailure(Throwable caught) {
                }

                @Override
                public void onSuccess(Container_TrafficPattern result) {
                    // TODO Auto-generated method stub
                    pContainer.SetaDate(result.aDate.get(1));
                }
            });

            pContainer.aDate.get(0);
            setChartSize(350, 200);
            setChartTitle("<h2>Temperature vs Time<h2>");
            setPadding("8px");
            // setPixelSize(380, 200);
            getXAxis().setAxisLabel("<small><b><i>Time</i></b></small>");
            getXAxis().setHasGridlines(true);
            getXAxis().setTickCount(6);
            // Except for "=(Date)", a standard GWT DateTimeFormat string
            getXAxis().setTickLabelFormat("=(Date)h:mm a");
            getYAxis().setAxisLabel("<small><b><i>&deg;C</i></b></small>");
            getYAxis().setHasGridlines(true);
            getYAxis().setTickCount(11);
            getYAxis().setAxisMin(11);
            getYAxis().setAxisMax(16);
            addCurve();
            getCurve().setLegendLabel("<i> </i>");
            getCurve().getSymbol().setBorderColor("blue");
            getCurve().getSymbol().setBackgroundColor("blue");
            // getCurve().getSymbol().setFillSpacing(10);
            // getCurve().getSymbol().setFillThickness(3);
            getCurve().getSymbol().setSymbolType(SymbolType.LINE);
            getCurve().getSymbol().setFillThickness(2);
            getCurve().getSymbol().setFillSpacing(1);
            for (int i = 0; i < dateSequence.length; i++) {
                // Note that getTime() returns milliseconds since 1/1/70 --
                // required whenever "date cast" tick label formats (those
                // beginning with "=(Date)") are used.
                getCurve().addPoint(dateSequence[i].date.getTime(), dateSequence[i].value);
            }
        }
    }
    ```

    Read the article

  • QueryInterface fails at casting inside COM-interface implementation

    - by brecht
    I am creating a tool in C# to retrieve messages from a CAN network (the network in a car) using a DLL written in C/C++. This DLL is usable as a COM interface. My C# form class implements one of these COM interfaces, and other variables are instantiated using these COM interfaces (everything works perfectly).

    The problem: the interface my C# form implements has 3 abstract functions. One of these functions is called by the DLL, and I need to implement it myself. In this function I want to retrieve a property of a form-wide variable that is of a COM type. The COM library is CANSUPPORTLib.

    The form-wide variable:

    ```csharp
    private CANSUPPORTLib.ICanIOEx devices = new CANSUPPORTLib.CanIO();
    ```

    This variable is also form-wide and is retrieved via the devices variable:

    ```csharp
    canreceiver = (CANSUPPORTLib.IDirectCAN2)devices.get_DirectDispatch(receiverLogicalChannel);
    ```

    The function that is called by the DLL and implemented in C#:

    ```csharp
    public void Message(double dTimeStamp)
    {
        Console.WriteLine("!!! message ontvangen !!!" + Environment.NewLine);
        try
        {
            CANSUPPORTLib.can_msg_tag message = new CANSUPPORTLib.can_msg_tag();
            message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(
                canreceiver.RawMessage, message.GetType());
            for (int i = 0; i < message.data.Length; i++)
            {
                Console.WriteLine("byte " + i + ": " + message.data[i]);
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
    }
    ```

    The error is raised at the line that assigns message via Marshal.PtrToStructure:

    ```text
    Unable to cast COM object of type 'System.__ComObject' to interface type 'CANSUPPORTLib.IDirectCAN2'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{33373EFC-DB42-48C4-A719-3730B7F228B5}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).
    ```

    Notes: it is possible to have a timer that checks every 100 ms for the message I need; the message is then retrieved in exactly the same way as above. That timer is started when the form starts, and the check only runs after Message(double) has set a flag to true (a message arrived). When the timer is instead started inside the Message function, I get the same error as above. Starting another thread when the form starts is also not possible. Is there someone with experience with COM interop?
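    A hedged guess at the cause, with a sketch (nothing here is confirmed by the question, and CanForm/ReadRawMessage are hypothetical names): E_NOINTERFACE when casting a cached RCW is the classic symptom of the DLL firing the callback on a thread in a different COM apartment from the STA form thread that created the object. One workaround is to capture the form thread's SynchronizationContext and marshal the access back before touching canreceiver.

    ```csharp
    using System.Threading;
    using System.Windows.Forms;

    // A minimal sketch, assuming the RCW was created on the form's STA thread
    // and the DLL invokes Message on some other (MTA) thread.
    public class CanForm : Form
    {
        private readonly SynchronizationContext ui;

        public CanForm()
        {
            // WindowsFormsSynchronizationContext is installed once a control
            // exists, so capturing it in the form constructor is safe.
            ui = SynchronizationContext.Current;
        }

        public void Message(double dTimeStamp)
        {
            // Post the COM access back to the thread that owns the RCW.
            ui.Post(_ => ReadRawMessage(), null);
        }

        private void ReadRawMessage()
        {
            // ...the PtrToStructure code from the question, now running on
            // the creating (STA) thread, where the QueryInterface succeeds.
        }
    }
    ```

    If that is the cause, it would also explain why a UI-thread timer polling a flag works while doing the work directly inside Message does not.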

    Read the article

  • Chaining IQueryables together

    - by Matt Greer
    I have a RIA Services based app that is using Entity Framework on the server side (possibly not relevant). In my real app, I can do something like this:

    ```csharp
    EntityQuery<Status> query = statusContext.GetStatusesQuery().Where(s => s.Description.Contains("Foo"));
    ```

    where statusContext is the client-side subclass of DomainContext that RIA Services was kind enough to generate for me. The end result is an EntityQuery<Status> object whose Query property is an object that implements IQueryable and represents my where clause. The WebDomainClient is able to take this EntityQuery and give me back not just all of my Statuses, but all of them filtered with my where clause.

    I am trying to implement this in a mock DomainClient. This MockDomainClient accepts an IQueryable<Entity>, which it returns when asked for. But what if the user makes the query and includes the ad hoc additional query? How can I merge the two together? My MockDomainClient (modeled after this blog post) is:

    ```csharp
    public class MockDomainClient : LocalDomainClient
    {
        private IQueryable<Entity> _entities;

        public MockDomainClient(IQueryable<Entity> entities)
        {
            _entities = entities;
        }

        public override IQueryable<Entity> DoQuery(EntityQuery query)
        {
            if (query.Query == null)
            {
                return _entities;
            }
            // otherwise I want the union of _entities and query.Query
            // (query.Query is IQueryable); the below does not work and was
            // a total shot in the dark:
            // return _entities.Union(query.Query.Cast<Entity>());
        }
    }

    public abstract class LocalDomainClient : System.ServiceModel.DomainServices.Client.DomainClient
    {
        private SynchronizationContext _syncContext;

        protected LocalDomainClient()
        {
            _syncContext = SynchronizationContext.Current;
        }

        ...

        public abstract IQueryable<Entity> DoQuery(EntityQuery query);

        protected override IAsyncResult BeginQueryCore(EntityQuery query, AsyncCallback callback, object userState)
        {
            IQueryable<Entity> localQuery = DoQuery(query);
            LocalAsyncResult asyncResult = new LocalAsyncResult(callback, userState, localQuery);
            _syncContext.Post(o => (o as LocalAsyncResult).Complete(), asyncResult);
            return asyncResult;
        }

        ...
    }
    ```
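    One approach that may work here (a sketch under assumptions, not a confirmed RIA Services pattern; QueryRebaser is an illustrative name): rather than Union-ing the two queryables, rebase the ad hoc query's expression tree onto the mock source, so the captured Where clause is applied to _entities by LINQ to Objects. This assumes the captured expression is rooted in a ConstantExpression of a queryable type, which is true for ordinary LINQ queries.

    ```csharp
    using System.Linq;
    using System.Linq.Expressions;
    using System.ServiceModel.DomainServices.Client;

    // A sketch: replace the query's server-side root with the in-memory
    // source, then let the local provider execute the composed query.
    public class QueryRebaser : ExpressionVisitor
    {
        private readonly ConstantExpression _newRoot;

        public QueryRebaser(IQueryable<Entity> localSource)
        {
            _newRoot = Expression.Constant(localSource, typeof(IQueryable<Entity>));
        }

        protected override Expression VisitConstant(ConstantExpression node)
        {
            // Swap the queryable root for the mock data; leave other constants alone.
            return typeof(IQueryable).IsAssignableFrom(node.Type) ? (Expression)_newRoot : node;
        }
    }

    // Inside MockDomainClient.DoQuery, instead of the Union attempt:
    //   var rebased = new QueryRebaser(_entities).Visit(query.Query.Expression);
    //   return _entities.Provider.CreateQuery<Entity>(rebased);
    ```

    The idea is that "merging" is really "applying": the ad hoc query is a decoration of the base query, not a second result set to union in.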

    Read the article

  • Hive MR map progress inconsistent and regularly restarts from 0%

    - by user92471
    I have a YARN MR job (with two EC2 instances doing the mapreduce) over a dataset of approximately a thousand Avro records, and the map phase is behaving erratically; see the progress below. Of course I checked the logs on the resourcemanager and the nodemanagers and saw nothing suspicious, but those logs are very verbose. What is going on here?

    ```text
    hive> select * from nikon where qs_cs_s_aid='VIEW' limit 10;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks is set to 0 since there's no reduce operator
    Starting Job = job_1352281315350_0020, Tracking URL = http://blabla.ec2.internal:8088/proxy/application_1352281315350_0020/
    Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=blabla.com:8032 -kill job_1352281315350_0020
    Hadoop job information for Stage-1: number of mappers: 4; number of reducers: 0
    2012-11-07 11:14:40,976 Stage-1 map = 0%, reduce = 0%
    2012-11-07 11:15:06,136 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 10.38 sec
    2012-11-07 11:15:07,253 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
    2012-11-07 11:15:08,371 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
    2012-11-07 11:15:09,491 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 12.18 sec
    2012-11-07 11:15:10,643 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 15.42 sec
    (...)
    2012-11-07 11:15:35,441 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec
    2012-11-07 11:15:36,486 Stage-1 map = 28%, reduce = 0%, Cumulative CPU 37.77 sec

    -- here it restarts at 16%?
    2012-11-07 11:15:37,692 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
    2012-11-07 11:15:38,815 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
    2012-11-07 11:15:39,865 Stage-1 map = 16%, reduce = 0%, Cumulative CPU 21.15 sec
    2012-11-07 11:15:41,064 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
    2012-11-07 11:15:42,181 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec
    2012-11-07 11:15:43,299 Stage-1 map = 18%, reduce = 0%, Cumulative CPU 22.4 sec

    -- here it restarts at 0%?
    2012-11-07 11:15:44,418 Stage-1 map = 0%, reduce = 0%
    2012-11-07 11:16:02,076 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
    2012-11-07 11:16:03,193 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 6.86 sec
    2012-11-07 11:16:04,259 Stage-1 map = 2%, reduce = 0%, Cumulative CPU 8.45 sec
    (...)
    2012-11-07 11:16:31,291 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 35.34 sec
    2012-11-07 11:16:32,414 Stage-1 map = 26%, reduce = 0%, Cumulative CPU 37.93 sec

    -- here it restarts at 11%?
    2012-11-07 11:16:33,459 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
    2012-11-07 11:16:34,507 Stage-1 map = 11%, reduce = 0%, Cumulative CPU 19.53 sec
    2012-11-07 11:16:35,731 Stage-1 map = 13%, reduce = 0%, Cumulative CPU 21.47 sec
    (...)
    2012-11-07 11:16:46,839 Stage-1 map = 17%, reduce = 0%, Cumulative CPU 24.14 sec

    -- here it restarts at 0%?
    2012-11-07 11:16:47,939 Stage-1 map = 0%, reduce = 0%
    2012-11-07 11:16:56,653 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
    2012-11-07 11:16:57,814 Stage-1 map = 1%, reduce = 0%, Cumulative CPU 7.54 sec
    (...)
    ```

    Needless to say, the job crashes after some time with:

    ```text
    Error: java.io.IOException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: -56
    ```

    Read the article

  • Handling Model Inheritance in ASP.NET MVC2 & NHibernate

    - by enth
    I've gotten myself stuck on how to handle inheritance in my model when it comes to my controllers/views.

    Basic model:

    ```csharp
    public class Procedure : Entity
    {
        public Procedure() { }
        public int Id { get; set; }
        public DateTime ProcedureDate { get; set; }
        public ProcedureType Type { get; set; }
    }

    public class ProcedureA : Procedure
    {
        public double VariableA { get; set; }
        public int VariableB { get; set; }
        public int Total { get; set; }
    }

    public class ProcedureB : Procedure
    {
        public int Score { get; set; }
    }
    ```

    ...and eventually there will be many different procedures. Listing all the procedures works fine:

    ```csharp
    public class ProcedureController : Controller
    {
        public virtual ActionResult List()
        {
            IEnumerable<Procedure> procedures = _repository.GetAll();
            return View(procedures);
        }
    }
    ```

    but now I'm stuck. Basically, from the List page I need to link to pages where the specific subclass details can be viewed/edited, and I'm not sure what the best strategy is.

    I thought I could add an action on ProcedureController that conjures up the right subclass by dynamically figuring out which repository to use and loading the subclass to pass to the view. I had to store the class name in the ProcedureType object, and I had to create/implement a non-generic IRepository, since I can't dynamically cast to a generic one:

    ```csharp
    public virtual ActionResult Details(int procedureID)
    {
        Procedure procedure = _repository.GetById(procedureID, false);
        string className = procedure.Type.Class;
        Type type = Type.GetType(className, true);
        Type repositoryType = typeof(IRepository<>).MakeGenericType(type);
        var repository = (IRepository)DependencyRegistrar.Resolve(repositoryType);
        Entity concrete = repository.GetById(procedureID, false);
        return View(concrete);
    }
    ```

    I haven't even started sorting out how the view will determine which partial to load to display the subclass details, and I'm wondering if this is a good approach at all. It makes determining the URL easy, and it makes reusing the Procedure display code easy.

    Another approach is specific controllers for each subclass. That simplifies the controller code, but it also means many trivial controllers for the many procedure subclasses. The shared Procedure details could be handled with a partial view, but then how do I construct the URL that reaches the right controller/action in the first place?

    Time to stop thinking about it for a while. Hopefully someone can show me the light. Thanks in advance.
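    For the view side, one convention-based option (a hedged sketch, not from the question; ProcedureDetailsController, LoadConcreteProcedure, and the view names are illustrative) is to keep the single dynamic Details action and let a shared view dispatch to a partial named after the model's runtime type:

    ```csharp
    using System.Web.Mvc;

    // A sketch of one convention: Views/Procedure/ProcedureA.ascx renders
    // ProcedureA's extra fields, Views/Procedure/ProcedureB.ascx renders
    // ProcedureB's, and a shared Details view renders the common fields.
    public class ProcedureDetailsController : Controller
    {
        public ActionResult Details(int procedureID)
        {
            object procedure = LoadConcreteProcedure(procedureID);

            // The shared Details view renders the common Procedure fields, then:
            //   <% Html.RenderPartial(Model.GetType().Name, Model); %>
            // which dispatches to the subclass-specific partial by name.
            return View("Details", procedure);
        }

        private object LoadConcreteProcedure(int procedureID)
        {
            // Placeholder for the reflection-based repository lookup
            // shown in the question.
            throw new System.NotImplementedException();
        }
    }
    ```

    That keeps URL generation trivial (every subtype goes through /Procedure/Details/{id}) while each subclass still owns its display markup; the trade-off versus per-subclass controllers is a little reflection in one place instead of many small controllers.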

    Read the article

  • Reflection - SetValue of array within class?

    - by Jaymz87
    OK, I've been working on something for a while now, using reflection to accomplish a lot of what I need to do, but I've hit a bit of a stumbling block: I'm trying to use reflection to populate the properties of an array held in a child property. Not sure that's clear, so it's probably best explained in code.

    Parent class:

    ```vb
    Public Class parent
        Private _child As childObject()
        Public Property child As childObject()
            Get
                Return _child
            End Get
            Set(ByVal value As childObject())
                _child = value
            End Set
        End Property
    End Class
    ```

    Child class:

    ```vb
    Public Class childObject
        Private _name As String
        Public Property name As String
            Get
                Return _name
            End Get
            Set(ByVal value As String)
                _name = value
            End Set
        End Property

        Private _descr As String
        Public Property descr As String
            Get
                Return _descr
            End Get
            Set(ByVal value As String)
                _descr = value
            End Set
        End Property
    End Class
    ```

    So, using reflection, I'm trying to set the values of the array of child objects through the parent object. I've tried several methods; the following is pretty much what I've got at the moment (sample data added to keep things simple):

    ```vb
    Dim Results(1) As String
    Results(0) = "1,2"
    Results(1) = "2,3"

    Dim parent As New parent
    Dim child As childObject() = New childObject() {}
    Dim PropInfo As PropertyInfo() = child.GetType().GetProperties()
    Dim i As Integer = 0
    For Each res As String In Results
        Dim ResultSet As String() = res.Split(",")
        ReDim child(i)
        Dim j As Integer = 0
        For Each PropItem As PropertyInfo In PropInfo
            PropItem.SetValue(child, ResultSet(j), Nothing)
            j += 1
        Next
        i += 1
    Next
    parent.child = child
    ```

    This fails miserably on PropItem.SetValue with an ArgumentException: "Property set method not found." Anyone have any ideas?

    @Jon: Thanks, I think I've gotten a little further by creating individual child objects and then assigning them to an array. The issue is now getting that array assigned to the parent object (using reflection). It shouldn't be difficult, but the problem is that I don't necessarily know the parent/child types: I'm using reflection to determine which parent/child is being passed in. The parent always has only one property, which is an array of the child object. When I try assigning the child array to the parent object, I get an InvalidCastException saying it can't convert Object[] to childObject[].

    EDIT: Basically, what I have now is:

    ```vb
    Dim PropChildInfo As PropertyInfo() = ResponseObject.GetType().GetProperties()
    For Each PropItem As PropertyInfo In PropChildInfo
        PropItem.SetValue(ResponseObject, ResponseChildren, Nothing)
    Next
    ```

    ResponseObject is an instance of the parent class, and ResponseChildren is an array of the childObject class. This fails with: Object of type 'System.Object[]' cannot be converted to type 'childObject[]'.
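    A sketch of a fix (written in C#; the question's code is VB.NET, but the BCL calls are identical, and ChildBuilder/BuildChildren are illustrative names): the SetValue failure comes from reading the properties off the *array* type and writing to the array object itself, while the later invalid cast comes from handing the parent an Object[]. Building the array with Array.CreateInstance and setting properties on each element addresses both.

    ```csharp
    using System;
    using System.Reflection;

    public static class ChildBuilder
    {
        // rows[i] holds the comma-split values for element i, as in the question.
        public static Array BuildChildren(Type childType, string[][] rows)
        {
            // Properties of the *element* type, not of childObject[] (an array
            // type only exposes Array's properties, hence the original error).
            PropertyInfo[] props = childType.GetProperties();

            // Array.CreateInstance yields a real childObject[] at runtime, so
            // the final assignment to parent.child no longer sees an Object[].
            Array children = Array.CreateInstance(childType, rows.Length);

            for (int i = 0; i < rows.Length; i++)
            {
                object child = Activator.CreateInstance(childType);
                for (int j = 0; j < props.Length && j < rows[i].Length; j++)
                {
                    // Set each value on the element, not on the array.
                    props[j].SetValue(child, rows[i][j], null);
                }
                children.SetValue(child, i);
            }
            return children;
        }
    }
    ```

    The returned Array can then be passed to PropItem.SetValue(ResponseObject, children, Nothing) as-is, because its runtime type is childObject[]. (Note that GetProperties makes no ordering guarantee, so mapping CSV columns to properties by index, as the original code does, is fragile.)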

    Read the article

  • DynamicMethod for ConstructorInfo.Invoke, what do I need to consider?

    - by Lasse V. Karlsen
    My question is this: if I'm going to build a DynamicMethod object corresponding to a ConstructorInfo.Invoke call, what IL do I need to emit to cope with all (or most) types of arguments, given that I can guarantee the right number and types of arguments will be passed in before I make the call?

    Background: I am on the 3rd iteration of my IoC container, currently doing some profiling to figure out whether there are any areas where I can easily shave off large amounts of time. One thing I noticed is that when resolving to a concrete type, I ultimately end up calling a constructor through ConstructorInfo.Invoke, passing in an array of arguments that I've worked out. That Invoke call has quite a bit of overhead, and I suspect most of it duplicates checks I have already done: because of my constructor-matching code (which finds a constructor matching the predefined parameter names, types, and values), there is no way this particular Invoke call can receive something it can't cope with — the argument count, order, types, and values are all already validated.

    In a profiling session of a million calls to my resolve method, replacing the Invoke call with a DynamicMethod implementation that mimics it gave these timings:

    - ConstructorInfo.Invoke: 1973 ms
    - DynamicMethod: 93 ms

    This accounts for around 20% of the total runtime of the profiling application. In other words, by replacing the ConstructorInfo.Invoke call with a DynamicMethod that does the same thing, I can shave 20% off the runtime for basic factory-scoped services (i.e. where every resolution call ends in a constructor call). I think that is fairly substantial, and it warrants a closer look at how much work a stable DynamicMethod generator for constructors would be in this context.

    The dynamic method would take in an object array and return the constructed object, and I already know the ConstructorInfo in question. It therefore looks like the dynamic method should be made up of the following IL:

    ```text
    l001: ldarg.0      ; the object array containing the arguments
    l002: ldc.i4.0     ; the index of the first argument
    l003: ldelem.ref   ; get the value of the first argument
    l004: castclass T  ; cast to the right argument type (only if not "Object")
          (repeat l001-l004 for each parameter, l004 only for non-Object types,
           varying the l002 constant from 0 upwards for each index)
    l005: newobj ci    ; call the constructor
    l006: ret
    ```

    Is there anything else I need to consider? Note that I'm aware that creating dynamic methods will probably not be available when running the application in "reduced access mode" (sometimes the brain just won't give up those terms), but in that case I can easily detect that and just call the original constructor as before, overhead and all.
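    For what it's worth, a sketch of the generator described above, using standard Reflection.Emit calls (nothing here is specific to any particular container; ConstructorCompiler and Compile are illustrative names). One thing the IL listing above misses: castclass is wrong for value-type parameters, which need unbox.any instead, and a value-type construction result would need boxing before ret.

    ```csharp
    using System;
    using System.Reflection;
    using System.Reflection.Emit;

    public static class ConstructorCompiler
    {
        // Builds a delegate equivalent to ci.Invoke(args), assuming args has
        // already been validated for arity, order, and types (as in the question).
        public static Func<object[], object> Compile(ConstructorInfo ci)
        {
            var dm = new DynamicMethod(
                "ctor_" + ci.DeclaringType.Name,
                typeof(object),
                new[] { typeof(object[]) },
                ci.DeclaringType.Module,
                true); // skipVisibility, so non-public constructors work too

            ILGenerator il = dm.GetILGenerator();
            ParameterInfo[] parameters = ci.GetParameters();
            for (int i = 0; i < parameters.Length; i++)
            {
                il.Emit(OpCodes.Ldarg_0);    // the object[] of arguments
                il.Emit(OpCodes.Ldc_I4, i);  // the argument index
                il.Emit(OpCodes.Ldelem_Ref); // load args[i]
                Type t = parameters[i].ParameterType;
                if (t.IsValueType)
                    il.Emit(OpCodes.Unbox_Any, t);   // value types: unbox, not castclass
                else if (t != typeof(object))
                    il.Emit(OpCodes.Castclass, t);
            }
            il.Emit(OpCodes.Newobj, ci);
            if (ci.DeclaringType.IsValueType)
                il.Emit(OpCodes.Box, ci.DeclaringType); // match the object return type
            il.Emit(OpCodes.Ret);

            return (Func<object[], object>)dm.CreateDelegate(typeof(Func<object[], object>));
        }
    }
    ```

    Caching the compiled delegate per constructor matters, since building a DynamicMethod is far more expensive than a single Invoke call; byref/out parameters and partial-trust environments would still need the Invoke fallback mentioned above.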

    Read the article

< Previous Page | 193 194 195 196 197 198 199 200 201 202 203 204  | Next Page >