Search Results

Search found 11537 results on 462 pages for 'double submit prevention'.


  • I have problems with adding rows to a data-bound DataGridView in a desktop app.

    - by Mishko
        DataTable table1 = new DataTable();
        double brutoUkupno1 = 0;
        double porezUkupno1 = 0;
        double doprinosUkupno1 = 0;
        double netoUkupno1 = 0;
        double doprinosTeretUkupno1 = 0;
        double topliObrokUkupno1 = 0;
        double regresUkupno1 = 0;
        Connection con = new Connection();
        table1 = con.boundTable(month, Convert.ToInt32(year)); // This is the method which returns a DataTable
        table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null);
        table1.Rows.Add(null, null, null, null, null, null, null, null, null, null, null, null, null, null);
        dgv2.Visible = true;
        dgv2.DataSource = table1;
        for (int i = 0; i < dgv2.RowCount - 2; i++)
        {
            topliObrokUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[7].Value);
            regresUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[8].Value);
            brutoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[9].Value);
            porezUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[10].Value);
            doprinosUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[11].Value);
            netoUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[12].Value);
            doprinosTeretUkupno1 += Convert.ToDouble(dgv2.Rows[i].Cells[13].Value);
        }
        // Now I am having problems with this below, putting the totals above into dgv2:
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[0].Value = "Ukupno";
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[3].Value = month.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[4].Value = year.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[7].Value = topliObrokUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[8].Value = regresUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[9].Value = brutoUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[10].Value = porezUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[11].Value = doprinosUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[12].Value = netoUkupno1.ToString();
        dgv2.Rows[dgv2.Rows.Count - 1].Cells[13].Value = doprinosTeretUkupno1.ToString();
        dgv2.Rows[dgv2.RowCount - 2].Height = 3;
        dgv2.Rows[dgv2.RowCount - 2].DefaultCellStyle.BackColor = Color.Black;
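    For reference, a data-bound DataGridView generally can't have its rows edited independently of its data source, so the totals usually have to be written into the bound DataTable itself. A minimal sketch of that approach -- the column indexes, the "Ukupno" label, and the boundTable() helper come from the question, while the DBNull handling and the assumption that column 0 holds strings are mine:

        // Sketch: compute column totals from the DataTable and write them into
        // the appended totals row, then bind the finished table to the grid.
        DataTable table1 = con.boundTable(month, Convert.ToInt32(year));
        table1.Rows.Add();                  // thin separator row
        DataRow totals = table1.Rows.Add(); // totals row

        for (int col = 7; col <= 13; col++)
        {
            double sum = 0;
            for (int i = 0; i < table1.Rows.Count - 2; i++) // skip the two appended rows
                sum += table1.Rows[i].IsNull(col) ? 0 : Convert.ToDouble(table1.Rows[i][col]);
            totals[col] = sum;
        }
        totals[0] = "Ukupno";     // assumes column 0 is a string column
        dgv2.DataSource = table1; // bind only after the table is complete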

    Read the article

  • Multiset container appears to stop sorting

    - by Sarah
    I would appreciate help debugging some strange behavior by a multiset container. Occasionally, the container appears to stop sorting. This is an infrequent error, apparent in only some simulations after a long time, and I'm short on ideas. (I'm an amateur programmer--suggestions of all kinds are welcome.) My container is a std::multiset that holds Event structs:

        typedef std::multiset< Event, std::less< Event > > EventPQ;

    with the Event structs sorted by their double time members:

        struct Event {
        public:
            explicit Event(double t) : time(t), eventID(), hostID(), s() {}
            Event(double t, int eid, int hid, int stype)
                : time(t), eventID( eid ), hostID( hid ), s(stype) {}
            bool operator < ( const Event & rhs ) const {
                return ( time < rhs.time );
            }
            double time;
            ...
        };

    The program iterates through periods of adding events with unordered times to EventPQ currentEvents and then pulling off events in order. Rarely, after some events have been added (with perfectly 'legal' times), events start getting executed out of order. What could make the events ever not get ordered properly? (Or what could mess up the iterator?) I have checked that all the added event times are legitimate (i.e., all exceed the current simulation time), and I have also confirmed that the error does not occur because two events happen to get scheduled for the same time. I'd love suggestions on how to work through this. The code for executing and adding events is below for the curious:

        double t = 0.0;
        double nextTimeStep = t + EPID_DELTA_T;
        EventPQ::iterator eventIter = currentEvents.begin();
        while ( t < EPID_SIM_LENGTH ) {
            // Add some events to currentEvents
            while ( ( *eventIter ).time < nextTimeStep ) {
                Event thisEvent = *eventIter;
                t = thisEvent.time;
                executeEvent( thisEvent );
                eventCtr++;
                currentEvents.erase( eventIter );
                eventIter = currentEvents.begin();
            }
            t = nextTimeStep;
            nextTimeStep += EPID_DELTA_T;
        }

        void Simulation::addEvent( double et, int eid, int hid, int s ) {
            assert( currentEvents.find( Event(et) ) == currentEvents.end() );
            Event thisEvent( et, eid, hid, s );
            currentEvents.insert( thisEvent );
        }

    Read the article

  • DataGridView validating old value instead of new value.

    - by Scott Chamberlain
    I have a DataGridView that is bound to a DataTable; it has a column that is a double, and the values need to be between 0 and 1. Here is my code:

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == dtxtPercentageOfUsersAllowed.Index)
            {
                double percentage;
                if (dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value.GetType() == typeof(double))
                    percentage = (double)dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value;
                else if (!double.TryParse(dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value.ToString(), out percentage))
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                    return;
                }
                if (percentage < 0 || percentage > 1)
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                }
            }
        }

    However, my issue is that when CellValidating fires, dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].Value contains the old value from before the edit, not the new value. For example, let's say the old value was .1 and I enter 3. The above code runs when you exit the cell, Value is still .1 for that run, so the code validates and writes 3 to the DataTable. I click on the cell a second time, try to leave, and this time it behaves like it should: it raises the error icon for the cell and prevents me from leaving. I then try to enter a correct value (say .7), but Value is still 3, and there is now no way out of the cell because it is locked due to the error and my validation code will never push the new value. Any recommendations would be greatly appreciated.

    EDIT -- New version of the code, based on Stuart's suggestion and mimicking the style the MSDN article uses. It still behaves the same.

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex == dtxtPercentageOfUsersAllowed.Index)
            {
                dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = String.Empty;
                double percentage;
                if (!double.TryParse(dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].FormattedValue.ToString(), out percentage)
                    || percentage < 0 || percentage > 1)
                {
                    e.Cancel = true;
                    dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
                    return;
                }
            }
        }
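    For reference, the event args themselves carry the pending input: DataGridViewCellValidatingEventArgs.FormattedValue holds the value just typed, whereas indexing back into the grid returns the value as it stood before the edit was committed. A minimal sketch of that pattern (not the poster's code; the grid and column names are kept from the question):

        private void dgvImpRDP_InfinityRDPLogin_CellValidating(object sender, DataGridViewCellValidatingEventArgs e)
        {
            if (e.ColumnIndex != dtxtPercentageOfUsersAllowed.Index)
                return;

            dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = String.Empty;

            // e.FormattedValue is the edited value under validation; the cell's
            // own Value/FormattedValue may still reflect the committed value.
            double percentage;
            if (!double.TryParse(Convert.ToString(e.FormattedValue), out percentage)
                || percentage < 0 || percentage > 1)
            {
                e.Cancel = true;
                dgvImpRDP_InfinityRDPLogin[e.ColumnIndex, e.RowIndex].ErrorText = "The value must be between 0 and 1";
            }
        }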

    Read the article

  • Validation functions in Google Apps Script are not working properly

    - by chocka
    I created an input form in a Google site using Apps Script, and I did the validation in Apps Script code. The validation functions available in Apps Script are not covering every error case I need to check.

        function validate(e) {
          var app = UiApp.getActiveApplication();
          var flag = 0;
          var text = app.getElementById('name');
          var textrequired = app.getElementById('namerequired');
          var number = app.getElementById('number');
          var numberrequired = app.getElementById('numberrequired');
          var email = app.getElementById('email');
          var emailrequired = app.getElementById('emailrequired');
          var submit = app.getElementById('submit_button');
          var valid = app.createClientHandler()
            .validateNumber(number)
            .validateNotInteger(text)
            .validateEmail(email)
            .forTargets(submit).setEnabled(true)
            .forTargets(number, text, email).setStyleAttribute("color", "black")
            .forTargets(numberrequired, textrequired, emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);
          var invalidno = app.createClientHandler().validateNotNumber(number).validateMatches(number, '').forTargets(number).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(numberrequired).setText('Please Enter a Valid No.').setStyleAttribute("color", "red").setVisible(true);
          var validno = app.createClientHandler().validateNumber(number).forTargets(number).setStyleAttribute("color", "black").forTargets(numberrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);
          var invalidText = app.createClientHandler().validateNumber(text).validateMatches(text, '').forTargets(text).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(textrequired).setText('Please Enter a Valid Name.').setStyleAttribute("color", "red").setVisible(true);
          var validText = app.createClientHandler().validateNotNumber(text).forTargets(text).setStyleAttribute("color", "black").forTargets(textrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);
          var invalidemail = app.createClientHandler().validateNotEmail(email).validateMatches(email, '').forTargets(email).setStyleAttribute("color", "red").forTargets(submit).setEnabled(false).forTargets(emailrequired).setText('Please Enter a Valid Mail-Id.').setStyleAttribute("color", "red").setVisible(true);
          var validemail = app.createClientHandler().validateEmail(email).forTargets(email).setStyleAttribute("color", "black").forTargets(emailrequired).setText('*').setStyleAttribute("color", "red").setVisible(true);
          number.addKeyPressHandler(invalidno).addKeyPressHandler(validno).addKeyPressHandler(valid).addKeyPressHandler(invalidText).addKeyPressHandler(invalidemail);
          text.addKeyPressHandler(invalidText).addKeyPressHandler(validText).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidemail);
          email.addKeyPressHandler(invalidemail).addKeyPressHandler(validemail).addKeyPressHandler(valid).addKeyPressHandler(invalidno).addKeyPressHandler(invalidText);
          if (text == '') { flag = 1; }
          if (email == '') { flag = 1; }
          if (number == '') { flag = 1; }
          if (flag == 1) { submit.setEnabled(false); }
          return app;
        }

    That is my validation function. I don't know why it isn't covering all of the validation cases. I also need to enable the submit button only after all of the fields pass validation; once it is enabled, if I then make an error in any field, the button does not get disabled correctly. I think I wrote the code correctly.
    Please take a look at my validation function and give me some suggestions to make it work. Please guide me. Thanks & Regards, chocka.

    Read the article

  • Returning different data types in C#

    - by user1810659
    I have created a class library (DLL) with many different methods, and they return different types of data (string, string[], double, double[]). Therefore I created one class, which I called CustomDataType, containing all of the different data types used by the methods, so that each method in the library can return an object of the custom class and in this way return multiple data types. I have done it like this:

        public class CustomDataType
        {
            public double Value;
            public string Timestamp;
            public string Description;
            public string Unit;

            // special for GetParameterInfo
            public string OpcItemUrl;
            public string Source;
            public double Gain;
            public double Offset;
            public string ParameterName;
            public int ParameterID;

            public double[] arrayOfValue;
            public string[] arrayOfTimestamp;
            // public string[] arrayOfParameterName;
            public string[] arrayOfUnit;
            public string[] arrayOfDescription;
            public int[] arrayOfParameterID;
            public string[] arrayOfItemUrl;
            public string[] arrayOfSource;
            public string[] arrayOfModBusRegister;
            public string[] arrayOfGain;
            public string[] arrayOfOffset;
        }

    The library contains methods like these:

        public CustomDataType GetDeviceParameters(string deviceName)
        {
            // ...................... code
            getDeviceParametersObj.arrayOfParameterName;
            return getDeviceParametersObj;
        }

        public CustomDataType GetMaxMin(string parameterName, string period, string maxMin)
        {
            // ..................................... code
            getMaxMingObj.Value = (double)reader["MaxMinValue"];
            getMaxMingObj.Timestamp = reader["MeasurementDateTime"].ToString();
            getMaxMingObj.Unit = reader["Unit"].ToString();
            getMaxMingObj.Description = reader["Description"].ToString();
            return getMaxMingObj;
        }

        public CustomDataType GetSelectedMaxMinData(string[] parameterName, string period, string mode)
        {
            // ................................ code
            selectedMaxMinObj.arrayOfValue = MaxMinvalueList.ToArray();
            selectedMaxMinObj.arrayOfTimestamp = MaxMintimeStampList.ToArray();
            selectedMaxMinObj.arrayOfDescription = MaxMindescriptionList.ToArray();
            selectedMaxMinObj.arrayOfUnit = MaxMinunitList.ToArray();
            return selectedMaxMinObj;
        }

    As illustrated, the different methods return different data, and it works fine for me. But when I import the DLL and want to use the methods, Visual Studio shows all of the fields in the CustomDataType class as suggestions for every method, even though each method returns different data (this was illustrated in a picture in the original post). With all of the possible return fields suggested, the user can get confused and choose the wrong field for some of the methods. So my question is how I can improve this, so that Visual Studio suggests only the return fields belonging to each method.
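    One common way to get Visual Studio to suggest only the relevant members is to give each method its own small result type instead of one catch-all class. A rough sketch of the idea; all type names and placeholder values below are invented for illustration, not taken from the library:

        // Each method advertises exactly the members it actually fills in.
        public class MaxMinResult
        {
            public double Value;
            public string Timestamp;
            public string Unit;
            public string Description;
        }

        public class SelectedMaxMinResult
        {
            public double[] Values;
            public string[] Timestamps;
            public string[] Descriptions;
            public string[] Units;
        }

        public class MeasurementLibrary
        {
            public MaxMinResult GetMaxMin(string parameterName, string period, string maxMin)
            {
                // ... run the query as before, then populate only what applies:
                return new MaxMinResult
                {
                    Value = 42.0,             // placeholder for reader["MaxMinValue"]
                    Timestamp = "2012-01-01", // placeholder for reader["MeasurementDateTime"]
                    Unit = "kWh",
                    Description = "example"
                };
            }
        }

    IntelliSense on GetMaxMin(...) then only offers Value, Timestamp, Unit, and Description.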

    Read the article

  • Is throwing an exception a healthy way to exit?

    - by ramaseshan
    I have a setup that looks like this:

        class Checker
        { // member data
            Results m_results; // see below

        public:
            bool Check();

        private:
            bool Check1();
            bool Check2();
            // .. and so on
        };

    Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores (see below).

        bool Checker::Check()
        {
            // initialisations etc.
            Check1();
            Check2();
            // ... and so on
        }

    A typical check function would look like this:

        bool Checker::Check1()
        {
            double result;
            // lots of code
            m_results.SetCheck1Result(result);
        }

    And the results class looks something like this:

        class Results
        {
            double m_check1Result;
            double m_check2Result;
            // ...

        public:
            void SetCheck1Result(double d);
            double GetOverallResult()
            {
                return max(m_check1Result, m_check2Result, ...);
            }
        };

    Note: all code is oversimplified. The Checker and Results classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know whether any of the results exceeds 1. If it does, subsequent checks need not be carried out (it's an optimisation). To achieve this, I could either:

    1. Modify every CheckN function to check its own result and return early, with the parent Check function repeatedly consulting m_results; or
    2. Throw an exception in Results::SetCheckNResult() if the value exceeds 1, and catch it at the end of Checker::Check().

    The first is tedious, error prone, and sub-optimal, because every CheckN function branches out further into sub-checks and so on. The second is non-intrusive and quick. One disadvantage I can think of is that the Checker code may not necessarily be exception-safe (although no other exception is thrown anywhere else). Is there anything else obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding? Is there a better third option?

    Read the article

  • WP7 Return the last 7 days of data from an XML web service

    - by cvandal
    Hello, I'm trying to return the last 7 days of data from an XML web service, but with no luck. Could someone please explain how I would accomplish this? The XML is as follows:

        <node>
          <api>
            <usagelist>
              <usage day="2011-01-01">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-02">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-03">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
              <usage day="2011-01-04">
                <traffic name="total" unit="bytes">23579797</traffic>
              </usage>
            </usagelist>
          </api>
        </node>

    EDIT: The data I want to retrieve will be used to populate a line graph. Specifically, I require the day attribute value and the traffic element value for the past 7 days. At the moment I have the code below in place; however, it only shows the first day 7 times and the traffic for the first day 7 times.

        XDocument xDocument = XDocument.Parse(e.Result);
        var values = from query in xDocument.Descendants("usagelist")
                     select new History
                     {
                         day = query.Element("usage").Attribute("day").Value,
                         traffic = query.Element("usage").Element("traffic").Value
                     };

        foreach (History history in values)
        {
            ObservableCollection<LineGraphItem> Data = new ObservableCollection<LineGraphItem>()
            {
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
                new LineGraphItem() { yyyymmdd = history.day, value = double.Parse(history.traffic) },
            };
            lineGraph1.DataSource = Data;
        }
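    For reference, the query above only ever reads query.Element("usage"), i.e. the first <usage> child of <usagelist>, which is why the same day repeats. A sketch that iterates the <usage> elements themselves and keeps the most recent seven (LineGraphItem and lineGraph1 are from the question; ordering the day attribute as text assumes the ISO yyyy-MM-dd format shown above, which sorts chronologically):

        XDocument xDocument = XDocument.Parse(e.Result);

        // Take the last seven <usage> elements by date, oldest first.
        var lastSevenDays = xDocument.Descendants("usage")
            .OrderByDescending(u => (string)u.Attribute("day"))
            .Take(7)
            .Reverse()
            .Select(u => new LineGraphItem
            {
                yyyymmdd = (string)u.Attribute("day"),
                value = double.Parse(u.Element("traffic").Value)
            });

        var data = new ObservableCollection<LineGraphItem>();
        foreach (LineGraphItem item in lastSevenDays)
            data.Add(item);
        lineGraph1.DataSource = data;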

    Read the article

  • Mixing C and C++, raw pointers and (boost) shared pointers

    - by oompahloompah
    I am working in C++ with some legacy C code. I have a data structure that, during initialisation, makes a copy of the structure pointed to by a pointer passed to its initialisation function. Here is a simplification of what I am trying to do -- hopefully, no important detail has been lost in the "simplification":

        /* C code */
        typedef struct MyData {
            double * elems;
            unsigned int len;
        } MyData;

        int NEW_mydata(MyData* data, unsigned int len)
        {
            // no error checking
            data->elems = (double *)calloc(len, sizeof(double));
            return 0;
        }

        typedef struct Foo {
            MyData data_;
        } Foo;

        void InitFoo(Foo * foo, const MyData * the_data)
        {
            // alloc mem etc ... then assign the STRUCTURE
            foo->data_ = *the_data;
        }

        // C++ code
        typedef boost::shared_ptr<MyData> MyDataPtr;
        typedef std::map<std::string, MyDataPtr> Datamap;

        class FooWrapper
        {
        public:
            FooWrapper(const std::string& key)
            {
                MyDataPtr mdp = dmap[key];
                InitFoo(&m_foo, mdp.get());
            }
            ~FooWrapper();

            double get_element(unsigned int index) const
            {
                return m_foo.data_.elems[index];
            }

        private:
            // non copyable, non-assignable
            FooWrapper(const FooWrapper&);
            FooWrapper& operator= (const FooWrapper&);
            Foo m_foo;
        };

        int main(int argc, char *argv[])
        {
            MyData data1, data2;
            Datamap dmap;

            NEW_mydata(&data1, 10);
            data1.elems[0] = static_cast<double>(22/7);
            NEW_mydata(&data2, 42);
            data2.elems[0] = static_cast<double>(13/21);

            boost::shared_ptr<MyData> d1(&data1), d2(&data2);
            dmap["data1"] = d1;
            dmap["data2"] = d2;

            FooWrapper fw("data1");
            // expect 22/7, get something else (random number?)
            double ret = fw.get_element(0);
        }

    Essentially, what I want to know is this: is there any reason why the data retrieved from the map is different from the data stored in the map?

    Read the article

  • How to use enumeration types in C++?

    - by Sagistic
    I do not understand how to use enumeration types. I understand what they are, but I don't quite get their purpose. I have made a program that inputs the three sides of a triangle and outputs whether the triangle is isosceles, scalene, or equilateral. I'm supposed to incorporate the enumeration type somewhere, but I don't get where and how to use it. Any help would be appreciated.

        #include <iostream>
        using namespace std;

        enum triangleType { scalene, isosceles, equilateral, noTriangle };

        void triangleShape(double x, double y, double z);

        int main()
        {
            double x, y, z;
            cout << "Please enter the three sides of a triangle:" << endl;
            cout << "Enter side 1: ";
            cin >> x;
            cout << endl;
            cout << "Enter side 2: ";
            cin >> y;
            cout << endl;
            cout << "Enter side 3: ";
            cin >> z;
            cout << endl;
            triangleShape(x, y, z);
            return 0;
        }

        void triangleShape(double x, double y, double z)
        {
            if (((x+y) > z) && ((x+z) > y) && ((y+z) > x))
            {
                cout << "You have a triangle!" << endl;
                if (x == y && y == z)
                    cout << "Your triangle is an equilateral" << endl;
                else if (x == y || x == z || y == z)
                    cout << "Your triangle is an isosceles" << endl;
                else
                    cout << "Your triangle is a scalene" << endl;
            }
            else if ((x+y) <= z || ((x+z) <= y) || ((y+z) <= x))
                cout << "You do not have a triangle." << endl;
        }

    Read the article

  • Build problems when adding `__str__` method to Boost Python C++ class

    - by Rickard
    I have started to play around with Boost Python a bit and ran into a problem. I tried to expose a C++ class to Python, which posed no problems, but I can't seem to manage to implement the __str__ functionality for the class without getting build errors I don't understand. I'm using Boost 1.42 prebuilt by BoostPro. I build the library using CMake and the VS2010 compiler. I have a very simple setup. The header file (tutorial.h) looks like the following:

        #include <iostream>

        namespace TestBoostPython {
            class TestClass {
            private:
                double m_x;
            public:
                TestClass(double x);
                double Get_x() const;
                void Set_x(double x);
            };
            std::ostream &operator<<(std::ostream &ostr, const TestClass &ts);
        };

    and the corresponding cpp file looks like:

        #include <boost/python.hpp>
        #include "tutorial.h"

        using namespace TestBoostPython;

        TestClass::TestClass(double x)
        {
            m_x = x;
        }

        double TestClass::Get_x() const
        {
            return m_x;
        }

        void TestClass::Set_x(double x)
        {
            m_x = x;
        }

        std::ostream &operator<<(std::ostream &ostr, TestClass &ts)
        {
            ostr << ts.Get_x() << "\n";
            return ostr;
        }

        BOOST_PYTHON_MODULE(testme)
        {
            using namespace boost::python;
            class_<TestClass>("TestClass", init<double>())
                .add_property("x", &TestClass::Get_x, &TestClass::Set_x)
                .def(str(self))
                ;
        }

    The CMakeLists.txt looks like the following:

        CMAKE_MINIMUM_REQUIRED(VERSION 2.8)
        project (testme)
        FIND_PACKAGE( Boost REQUIRED )
        FIND_PACKAGE( Boost COMPONENTS python REQUIRED )
        FIND_PACKAGE( PythonLibs REQUIRED )
        set(Boost_USE_STATIC_LIBS OFF)
        set(Boost_USE_MULTITHREAD ON)
        INCLUDE_DIRECTORIES(${Boost_INCLUDE_DIRS})
        INCLUDE_DIRECTORIES ( ${PYTHON_INCLUDE_PATH} )
        add_library(testme SHARED tutorial.cpp)
        target_link_libraries(testme ${Boost_PYTHON_LIBRARY})
        target_link_libraries(testme ${PYTHON_LIBRARY})

    The build error I get is the following:

        Compiling...
        tutorial.cpp
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(31) : error C2780: 'void boost::python::api::object_operators::visit(ClassT &,const char *,const boost::python::detail::def_helper &) const' : expects 3 arguments - 1 provided
            with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/object_core.hpp(203) : see declaration of 'boost::python::api::object_operators::visit'
            with [ U=boost::python::api::object ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/def_visitor.hpp(67) : see reference to function template instantiation 'void boost::python::def_visitor_access::visit,classT>(const V &,classT &)' being compiled
            with [ DerivedVisitor=boost::python::api::object, classT=boost::python::class_, V=boost::python::def_visitor ]
        C:\Program Files (x86)\boost\boost_1_42\boost/python/class.hpp(225) : see reference to function template instantiation 'void boost::python::def_visitor::visit>(classT &) const' being compiled
            with [ DerivedVisitor=boost::python::api::object, W=TestBoostPython::TestClass, classT=boost::python::class_ ]
        .\tutorial.cpp(29) : see reference to function template instantiation 'boost::python::class_ &boost::python::class_::def(const boost::python::def_visitor &)' being compiled
            with [ W=TestBoostPython::TestClass, U=boost::python::api::object, DerivedVisitor=boost::python::api::object ]

    Does anyone have any idea what went wrong? If I remove the .def(str(self)) part from the wrapper code, everything compiles fine and the class is usable from Python. I'd be very grateful for assistance. Thank you, Rickard

    Read the article

  • MSChart on ASP.NET MVC 2

    - by Adron
    I upgraded my MVC application using MSChart to MVC 2 and have ended up with broken image links for the charts. See my blog entry here: http://blog.adronbhall.com/post/2010/04/12/MVC-2-Breaks-my-Charts.aspx I get no build errors anymore and have completed the following steps. First, I set up the following web.config lines (the original post had the angle brackets removed so the lines would display; they are restored here):

        <add tagPrefix="asp" namespace="System.Web.UI.DataVisualization.Charting"
             assembly="System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

        <add path="ChartImg.axd" verb="GET,HEAD"
             type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             validate="false" />

    The next thing I did was create this page with the following code, which (going by it working in MVC 1) should show 4 charts:

        <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
        <%@ Import Namespace="Scorecard.Views" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Scorecard
        </asp:Content>
        <asp:Content ID="applicationTitle" ContentPlaceHolderID="ContentPlaceHolderApplicationName" runat="server">
            <%= Html.Encode(ViewData["ApplicationTitle"]) %>
        </asp:Content>
        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
        <form id="form1" runat="server">
            <h2>Web Analysis Scorecard</h2>
            <table>
                <tr>
                    <td>
                        <% ChartHelper chartHelper = new ChartHelper("Top Countries",
                               (double[])ViewData["TopCountryCounts"],
                               (string[])ViewData["TopCountries"], SeriesChartType.Pie);
                           Chart chartPieTwo = chartHelper.ResultingChart;
                           // Explode data point with label "USA"
                           chartPieTwo.Series["DefaultSeries"].Points[3]["Exploded"] = "true";
                           chartHelper.RenderChart(this); %>
                    </td>
                    <td>
                        <% chartHelper = new ChartHelper("View Cart Trend",
                               (double[])ViewData["LineValues"],
                               (string[])ViewData["TopEngines"], SeriesChartType.Line);
                           chartHelper.RenderChart(this); %>
                    </td>
                </tr>
                <tr>
                    <td>
                        <% chartHelper = new ChartHelper("Yesterday's Page Views",
                               (double[])ViewData["ColumnStats"],
                               (string[])ViewData["ColumnStatHeaders"], SeriesChartType.Column);
                           chartHelper.RenderChart(this); %>
                    </td>
                    <td>
                        <% double[] theValues = (double[])ViewData["ColumnStats"];
                           double[] newValues = new double[] { 0, 0, 0, 0 };
                           int count = 0;
                           int daysInMonth = DateTime.DaysInMonth(DateTime.Now.Year, DateTime.Now.Month);
                           foreach (double d in theValues)
                           {
                               newValues[count] += d * daysInMonth;
                               count++;
                           }
                           chartHelper = new ChartHelper("Current Month Page Views", newValues,
                               (string[])ViewData["ColumnStatHeaders"], SeriesChartType.Bar);
                           chartHelper.RenderChart(this); %>
                    </td>
                </tr>
            </table>
        </form>
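    One thing worth checking in an MVC 2 project is whether routing is intercepting the ChartImg.axd requests before they reach the chart handler. A sketch of the stock Global.asax route registration that leaves .axd handlers alone (this is the default MVC template code, offered here as a guess at the culprit, not taken from the post):

        using System.Web.Mvc;
        using System.Web.Routing;

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Keep routing away from .axd handlers, including ChartImg.axd.
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }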

    Read the article

  • Comments show up in database, but only show up on my index page after a refresh.

    - by Truong
    Hi, I have AJAX, PHP, jQuery, and MySQL in this very simple website I'm trying to make. All there is is a textarea that sends data to the database and uses AJAX/jQuery to display that data on the index page. For some reason, though, I press submit and the data goes to the database, but I have to refresh the page myself to see that data on the page. I'm assuming that the problem has to do with my AJAX/jQuery or even some mistake in the index. Also, when I type text into the textarea and press submit, the text remains in the textarea until I refresh the page. Haha, sorry if this is such a noob question.. I'm trying to learn. Thanks so much. Here is the AJAX:

        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.0/jquery.min.js"></script>
        <script type="text/javascript">
        $(function() {
            $(".submit").click(function() {
                var comment = $("#comment").val();
                var post_id = $("#post").val();
                var dataString = '&comment=' + comment;
                if (comment == '') {
                    alert('Fill something in please!');
                } else {
                    $("#flash").show();
                    $("#flash").fadeIn(400).html('<img src="noworries.jpg" /> ');
                    $.ajax({
                        type: "POST",
                        url: "commentajax.php",
                        data: dataString,
                        cache: false,
                        success: function(html) {
                            $("ol#update").append(html);
                            $("ol#update li:last").fadeIn("slow");
                            $("#flash").hide();
                        }
                    });
                }
                return false;
            });
        });
        </script>

    Here is the index/form area (two tags were garbled in the post; they are restored here as <ol id="update" class="timeline"> and <li class="box">, which matches the jQuery selectors above):

        <body>
        <div id="container"><img src="banner.jpg" width="890" height="150" alt="title" /></div>
        <ol id="update" class="timeline">
        <div id="flash"></div>
        <div id="container">
            <form action="#" method="post">
                <textarea name="comment" id="comment" cols="35" rows="4"></textarea><br />
                <input name="submit" type="submit" class="submit" id="submit" value=" Submit Comment " /><br />
            </form>
        </div>
        <ol id="update" class="timeline">
        <?php
        include('config.php');
        // the $post_id value comes from the POSTS table
        $prefix = "I'm happy when";
        $sql = mysql_query("select * from comments order by com_id desc");
        while ($row = mysql_fetch_array($sql)) {
            $comment = $row['com_dis'];
        ?>
        <!-- Displaying comments -->
        <div id="container">
            <li class="box">
                <?php echo "$prefix $comment"; ?>
            </li>
        </div>
        <?php } ?>

    Here is my commentajax.php:

        <?php
        include('config.php');
        if ($_POST) {
            $comment = $_POST['comment'];
            $comment = mysql_real_escape_string($comment);
            mysql_query("INSERT INTO comments(com_id,com_dis) VALUES ('NULL', '$comment')");
        }
        ?>
        <li class="box"><br />
        <?php echo $comment; ?>
        </li>

    I'm sorry for so much code, but I just started learning this four days ago and this is probably one of the last bugs until the website is functional.

    Read the article

  • curl multipart/form-data help

    - by user253530
    Hi, I am trying to post some data to a website using cURL. The posting process has 3 steps:

    1. Enter a URL, submit, and get to the 2nd step with some fields already completed.
    2. Submit again, after you have entered some more data, and preview the form.
    3. Submit the final data.

    The problem is that after the second step, the form data looks like this:

        POSTDATA =-----------------------------12249266671528
        Content-Disposition: form-data; name="title"

        Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme - CineMagia.ro
        -----------------------------12249266671528
        Content-Disposition: form-data; name="category"

        3
        -----------------------------12249266671528
        Content-Disposition: form-data; name="tags"

        filme, programe tv, program cinema
        -----------------------------12249266671528
        Content-Disposition: form-data; name="bodytext"

        Filme 2010, filme 2009, filme noi, programe TV, program cinema, premiere cinema, trailere filme
        -----------------------------12249266671528
        Content-Disposition: form-data; name="trackback"

        -----------------------------12249266671528
        Content-Disposition: form-data; name="url"

        http://cinemagia.ro
        -----------------------------12249266671528
        Content-Disposition: form-data; name="phase"

        2
        -----------------------------12249266671528
        Content-Disposition: form-data; name="randkey"

        9510520
        -----------------------------12249266671528
        Content-Disposition: form-data; name="id"

        17753
        -----------------------------12249266671528--

    I am stuck trying to devise an algorithm that will generate this kind of POST data for the second step. Just to mention, the URL of the form never changes; it is always http://www.xxx.com/submit. There is only a hidden input called "phase" that changes according to the step I am currently on (phase = 1, phase = 2, phase = 3). Any help, be it code, pseudo-code, or just guidance, would be greatly appreciated. My code so far:

        function postBlvsocialbookmarkingcom($curl, $vars)
        {
            extract($vars);
            $baseUrl = "http://www.blv-socialbookmarking.com/";

            // step 1: login
            $curl->setRedirect();
            $page = $curl->post($baseUrl.'login.php?return=/index.php',
                array('username' => $username,
                      'password' => $password,
                      'processlogin' => '1',
                      'return' => '/index.php'));
            if ($err = $curl->getError()) {
                return $err;
            }

            // post step 1 ----
            // get the random key
            $page = $curl->post($baseUrl.'/submit', array());
            $randomKey = explode('<input type="hidden" name="randkey" value="', $page);
            $randKey = explode('"', $randomKey[1]);
            // -------------------------------------
            $page = $curl->post($baseUrl.'/submit',
                array('url' => $address, 'phase' => '1', 'randkey' => $randKey[0], 'id' => 'c_1'));
            if ($err = $curl->getError()) {
                return $err;
            }
            // echo $page;

            // post step 2
            $page = $curl->post($baseUrl.'/submit',
                array('title' => $title,
                      'category' => '1',
                      'tags' => $tags,
                      'bodytext' => $description,
                      'phase' => '2'));
            if ($err = $curl->getError()) {
                return $err;
            }
            echo $page;

            // post step 3
            $page = $curl->post($baseUrl.'/submit', array('phase' => '3'));
            if ($err = $curl->getError()) {
                return $err;
            }
            echo $page;
        }

    Read the article

  • What causes Windows Media Player on Windows 8 to not play the entire library?

    - by somequixotic
    Behavior 1: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). A random song from any track in the Library is chosen and played. When observing the Playlist (clicking the "Play" tab), the entire contents of the Library appear in the Playlist.

    Behavior 2: Verify that the WMP playlist is clear of all songs. Turn on the "Shuffle" and "Repeat" features. Double-click on a music track in the Library. Click the "Next" button (double right angle brackets). The button visually depresses like it has registered the click, but nothing happens. Absolutely nothing. Moreover, the "Previous" button is grayed out. When observing the Playlist, only the one song that was double-clicked appears in the Playlist.

    What causes Behavior 2? I cannot correlate any specific action I've taken with Behavior 2, and Behavior 1 has been the case as long as I can remember, all the way back to Windows XP. Even earlier during my usage of Windows 8, I recall Behavior 1 working correctly. But suddenly, inexplicably, without changing any settings in WMP, Behavior 2 kicked in, and it persists after reboots. I've tried sfc /scannow in an administrator prompt. All system files are in order. I've downloaded all Windows Updates and driver updates. I've attempted to alter WMP options and playback settings to no avail. So... what is causing Behavior 2? Is this an intended, valid behavior, or is something malfunctioning? How would I know what that "something" is? How would I go about fixing it without just reinstalling Windows 8 fresh?

    Read the article

  • using svnadmin in a php script

    - by fabjoa
    Howdie. Scenario: allow developers to submit new application packages to a market server. Developers run a bash script which contains a cURL call to the market server (localhost/market/submit/$app-name). The submit script on the server creates a new folder in the existing svn server with the name of the submitted app. The script on the dev side waits for HTTP to issue a success message and then does an svn checkout on the dev's local machine.

    Problem: the submit script on the market server failed to create the new svn directory through this code:

        echo `svnadmin mkdir -m 'added new package $package' http://localhost/market/packages/$package`;

    This echoes nothing, and when I go to http://localhost/market/packages, the folder has not been added and the revision number has not been incremented. From a terminal on the market server I've tried

        chown root:www-data /usr/bin/svnadmin

    but still no luck. Has somebody come across a similar problem? Any solutions? Thanks! Profile: Linux/Ubuntu, Apache, Subversion.

    Read the article

  • N68-S UCC front panel audio

    - by user264522
    I have an ASRock N68-S UCC motherboard, and I need a hand identifying which pins on the board to connect my front panel audio header connectors to.

    On the connectors I have:
    - double connector: GND and MIC IN
    - double connector: LINE OUT FL and LINE OUT RL
    - double connector: LINE OUT FR and LINE OUT RR
    - simple connector: MIC POWER

    On the motherboard I have: GND, PRESENCE#, MIC_RET, OUT_RET, MIC2_L, MIC2_R, OUT2_R, J_SENSE, OUT2_L.

    (Links in the original post: manual of motherboard, and my front panel.) Thanks so much to all.

    Read the article

  • Looking for advice on Hyper-v storage replication

    - by Notre1
    I am designing a 2-host Hyper-V R2 cluster with 6-10 guests stored on an SMB iSCSI SAN device (probably Promise VessRAID). I will be getting at least two of the SAN devices and need to eliminate the storage as a single point of failure. Ideally, that would involve real-time failover for the storage, like the Windows failover clustering does for the hosts. This design will be used at around six of our sites, and I would like to allow for us to eventually set up a cluster at a colocation site and replicate each site's VMs there for DR. (Ideally a live multi-site cluster, but a manual import of the VMs would be fine for this sort of DR.)

    The tools that come with enterprise SANs, like EMC and NetApp, seem to be the most commonly used items for a Hyper-V cluster, but I can't afford their prices with my budget. Outside of them, the two tools that seem to be most common for Hyper-V storage replication are SteelEye (now SIOS) DataKeeper Cluster Edition and Double-Take Availability. Originally, I was planning on using Cluster Shared Volumes (CSVs), but it seems like replication support for these is either not available or brand new in both these products. It looks like CSVs are supported in Double-Take 5.22 (see this discussion), but I don't think I want to run something that new in production. Right now, it seems like the best option for me is not to implement CSVs, implement some sort of storage replication, and upgrade to CSVs at a later date once replicating them is more mature. I would love to have live migration, and CSVs are not required for live migration if you are using one LUN per VM, so I guess this is what I'll do.

    I would prefer to stick to using the Microsoft Windows Server and Hyper-V tools and features as much as possible. From that standpoint, SteelEye looks more appealing than Double-Take because it makes the DataKeeper volume(s) available to the Failover Clustering Manager, and then failover clustering is all configured and managed through the native Microsoft tools. Double-Take says that "clustered Hyper-V hosts are not supported," and Double-Take Availability itself seems to be what is used for the actual clustering and failover.

    Does anyone know if any of these replication tools work with more than two hosts in the cluster? All the information I can find on the web only uses two hosts in the examples. Are there any better tools than SteelEye and Double-Take for doing what I am trying to do, which is to eliminate the storage as a single point of failure? Neverfail, AppAssure, and DataCore all seem to offer similar functionality, but they don't seem to be as popular as SteelEye and Double-Take.

    I have seen a number of people suggest using StarWind iSCSI SAN software for the shared storage, which includes replication (and CSV replication at that). There are a couple of reasons I have not seriously considered this route: 1) the company I work for is exclusively a Dell shop, and Dell does not have any servers that I can pack with more than six 3.5" SATA drives; 2) in the future, it could be advantageous for us to not be locked into a particular brand or type of storage, and third-party replication software all allows replication to heterogeneous storage devices. I am pretty new to iSCSI and clustering, so please let me know if it looks like I am planning something that goes against best practices or am overlooking/missing something.

    Read the article

  • Windows file association for README, INSTALL, LICENSE and the like [closed]

    - by Lumi
    Possible Duplicate: How to set the default program for opening files without an extension in Windows?

    Many files originating in the UNIX world come without a file extension. Popular examples include README, INSTALL, and LICENSE. We know for a fact that these are text files. It is therefore a bit disappointing not to be able to just double-click them open in Explorer and see them in Notepad (actually, Notepad2, because of the UNIX line endings, which silly Microsoft Notepad doesn't render correctly). Does anyone know of a way to create a file association for, say, README files without an extension? This could then be replicated to cover the most frequently occurring file types, and double-clicking them open would then work.

    Update (sort of in response to all your comments): Thanks, folks, your comments and answers have helped me. @Indrek, yes, I was under the assumption that you could somehow create an association for just README or Makefile, and couldn't do so for files without an extension. Turns out the contrary is true, and yes, that is a workaround that neatly solves the issue. Ultimately, I just want to be able to double-click to open a README or Makefile, that's all. @Sampo, the Send To trick is also useful, although usability is not as great as a straight double-click. (I'm really lazy sometimes.) Turns out the following trick using assoc and ftype from an Administrator prompt does the double-click enabling job:

        assoc .=no_ext
        ftype no_ext=%SystemRoot%\system32\NOTEPAD.EXE %1
        :: You can see it created some entries in the registry:
        reg query hkcr\no_ext /s
        reg query hkcr\. /s

    Read the article

  • Mac OS 10.7 DMG files don't pop up anymore?

    - by Sosukodo
    Has anybody noticed that double-clicking on a DMG file no longer raises the mounted image to the front of all windows? It used to be that when you double-clicked a DMG file, it would pop up, but now you have to click Finder and click on the mounted image. Is this a bug, by design, or a setting that I can change? Update: As per Daniel Beck's suggestion, I created a new account and downloaded the most recent version of Firefox via Safari. It still exhibited the same behavior. However, I noticed that if I double-click the DMG from within the Downloads folder in Finder, it does pop up. But when I double-click the DMG within Safari (and Firefox also), it does not pop up over all other windows.

    Read the article

  • Call for Abstracts Now Open for Microsoft ASP.NET Connections (Closing April 26)

    - by plitwin
    We are putting out a call for abstracts to present at the Fall 2010 Microsoft ASP.NET Connections conference in Las Vegas, Nov 9-13, 2010. The due date for submissions is April 26, 2010. For submitting sessions, please use this URL: http://www.deeptraining.com/devconnections/abstracts

    Please keep the abstracts under 200 words each and in one paragraph. No bulleted items and line breaks, and please use a spell-checker. Do not email abstracts; you need to use the web-based tool to submit them. Please submit at least 3 abstracts, but it would help your chances of being selected if you submitted 5 or more abstracts. Also, you are encouraged to suggest all-day pre- or post-conference workshops as well. We need to finalize the conference content and the tracks layout in just a few short weeks, so we need your abstracts by April 26th. No exceptions will be granted on late submissions!

    Topics of interest include (but are not limited to):
    * ASP.NET WebForms
    * ASP.NET AJAX
    * ASP.NET MVC
    * Dynamic Data
    * Anything else related to ASP.NET

    For Fall 2010, we are having a separate Silverlight conference where you can submit abstracts for Silverlight and Windows Phone 7 development. In fact, you can use the same URL to submit sessions to Microsoft ASP.NET Connections, Silverlight Connections, Visual Studio Connections, or SQL Server Connections. The URL again is: http://www.deeptraining.com/devconnections/abstracts

    Please realize that while we want a lot of the new and the cool, it's also okay to propose sessions on the more mundane "real world" stuff as it pertains to ASP.NET. What you will get if selected:
    * $500 per regular conference talk.
    * Compensation for full-day workshops ranges from $500 for 1-20 attendees to $2500 for 200+ attendees.
    * Coach airfare and hotel stay paid by the conference.
    * Free admission to all of the co-located conferences.
    * Speaker party.
    * The adoration of attendees.
    * etc.

    Your continued support of Microsoft ASP.NET Connections and the other DevConnections conferences is appreciated. Good luck and thank you,
    Paul Litwin, Microsoft ASP.NET Conference Chair

    Read the article

  • LINQ and aggregate functions

    - by vik20000in
    LINQ also provides important aggregate functions. Aggregate functions are applied over a sequence and return a single value, such as Average, Count, Sum, and Max. Below are some of the aggregate functions provided with LINQ and examples of their use.

    Count:

        int[] primeFactorsOf300 = { 2, 2, 3, 5, 5 };
        int uniqueFactors = primeFactorsOf300.Distinct().Count();

    The example below counts only the odd factors:

        int[] primeFactorsOf300 = { 2, 2, 3, 5, 5 };
        int oddFactors = primeFactorsOf300.Distinct().Count(n => n % 2 == 1);

    Sum:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        double numSum = numbers.Sum();

    Minimum:

        int minNum = numbers.Min();

    Maximum:

        int maxNum = numbers.Max();

    Average:

        double averageNum = numbers.Average();

    Aggregate:

        double[] doubles = { 1.7, 2.3, 1.9, 4.1, 2.9 };
        double product = doubles.Aggregate((runningProduct, nextFactor) => runningProduct * nextFactor);

    Vikram
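    Aggregate also has seeded overloads, which help when the accumulator type differs from the element type; a small sketch (the sample data here is invented):

        string[] words = { "the", "quick", "brown", "fox" };

        // Seeded overload: start from 0 and accumulate the total character count.
        int totalChars = words.Aggregate(0, (running, w) => running + w.Length);

        // Seed plus result selector: accumulate, then project the final value.
        double avgLen = words.Aggregate(
            0,
            (running, w) => running + w.Length,
            total => (double)total / words.Length);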

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather
        ADD IF NOT EXISTS PARTITION (yr='2010')
        LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
            FROM historic_weather
            SELECT TRANSFORM(...)
            USING '/path/to/hive/filters/ncdc_parser.py'
            AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    1. I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    2. I have to include a hive-default.xml file.
    3. I have to include a script file.
    4. The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

    * workflow.xml
    * hive-default.xml (make sure this file contains your metastore connection data)
    * ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First, validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article
