Search Results

Search found 9495 results on 380 pages for 'double pointer'.


  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup containing a Mac Mini with OSX Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail etc. all set up and working). Now the thing is, internally on the local network, everything works great. Clients can bind to the OD and the users are able to log in. I can install trust and settings profiles (either custom or group profiles) and all services in the profiles mentioned are being configured correctly. I can log in and out, hump around and do it 100 times on different Macs with different users, and it works. My goal is to make this service publicly available. The domain is an FQDN which I own; for simplicity let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RFC2307 (without SSL) and a DN suffix of dc=server,dc=domain,dc=com using the Directory Utility. The options From Server or Open Directory will throw several errors like Connection failed to node '/LDAPv3/server.domain.com' (2100). First of all, I don't really understand why clients can't bind to the OD remotely like they do locally, with and without SSL (all ports are open, literally all ports, not just 389, 636 and 1640; I wasn't sure if I was missing any). When the clients are using LDAP mapping RFC2307 to bind (without SSL only), clients are able to authenticate, log in and even load the Trust profile. But every Settings profile will fail with a Debug Message: Unable to find GUID in user record OD, or fail to install saying missing user identification. Is there any way to get this to work without RFC2307? There is quite some stuff missing when using RFC2307 instead of pulling the mapping from the server or using Open Directory. Is this setup even possible? Or should I use VPN to authenticate with the OD? The network setup is a Modem/Router (DHCP off) with WAN NATted to an AirPort Extreme (using DHCP+NAT). The AE does notify with a double NAT message, but I haven't had any problems with it on any other service. So WAN - 192.168.2.220 (static), AE - 10.0.1.* (DHCP). Output of dig from the outside, using dig server.domain.com: ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;server.domain.com. IN A ;; ANSWER SECTION: server.domain.com. 77 IN A 91.50.*.* (valid WAN IP) ;; SERVER 172.*.*.1#53(172.*.*.1) (iPhone) dig locally from a client and the server (same output): ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0 ;; QUESTION SECTION: ;server.domain.com. IN A ;; ANSWER SECTION: server.domain.com. 10800 IN A 10.0.1.11 ;; AUTHORITY SECTION: server.domain.com. 10800 IN NS domain.com. (used for email send in relay) server.domain.com. 10800 IN NS server.domain.com. ;; SERVER 10.0.1.11#53(10.0.1.11) Are there any things I should check? I only have OSX. -- Double NAT issue: I plugged the server directly into the Modem/Router with a static IP and the issue remains. Guess that rules out the double NAT thing. -- changeip -checkhostname comes back with There is nothing to change, i.e. success. Primary address = 10.0.1.11 Current HostName = server.domain.com DNS HostName = server.domain.com For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means that before it comes to the login, a connection is already made or underway. I will continue this post when I have more time, also locating all the necessary .log files of each application involved. I have some suspicions but have to debug a bit more when I have more time on my hands.
Unless, of course, I get sidetracked with having a life. Which is arguably not very likely.


  • Problem with memset after an instance of a user defined class is created and a file is opened

    - by Liberalkid
    I'm having a weird problem with memset that has something to do with a class I'm creating before it and a file I'm opening in the constructor. The class I'm working with normally reads in an array and transforms it into another array, but that's not important. The class is: #include <vector> #include <algorithm> using namespace std; class PreProcess { public: PreProcess(char* fileName,char* outFileName); void SortedOrder(); private: vector< vector<double > > matrix; void SortRow(vector<double> &row); char* newFileName; vector< pair<double,int> > rowSorted; }; The other functions aren't important, because I've stopped calling them and the problem persists. Essentially I've narrowed it down to my constructor: PreProcess::PreProcess(char* fileName,char* outFileName):newFileName(outFileName){ ifstream input(fileName); input.close(); //this statement is inconsequential } I also read in the file in my constructor, but I've found that the problem persists if I don't read in the matrix and just open the file. Essentially I've narrowed it down to this: if I comment out those two lines, the memset works properly; otherwise it doesn't. Now to the context of the problem I'm having with it: I wrote my own simple wrapper class for matrices. It doesn't have much functionality; I just need 2D arrays in the next part of my project, and having a class handle everything makes more sense to me. The header file: #include <iostream> using namespace std; class Matrix{ public: Matrix(int r,int c); int &operator()(int i,int j) {//I know I should check my bounds here return matrix[i*columns+j]; } ~Matrix(); const void Display(); private: int *matrix; const int rows; const int columns; }; Driver: #include "Matrix.h" #include <string> using namespace std; Matrix::Matrix(int r,int c):rows(r),columns(c) { matrix=new int[rows*columns]; memset(matrix,0,sizeof(matrix)); } const void Matrix::Display(){ for(int i=0;i<rows;i++){ for(int j=0;j<columns;j++) cout << (*this)(i,j) << " "; cout << endl; } } Matrix::~Matrix() { delete matrix; } My main program runs: PreProcess test1(argv[1],argv[2]); //test1.SortedOrder(); Matrix test(10,10); test.Display(); And when I run this with the input line uncommented I get: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371727776 32698 -1 0 0 0 0 0 6332656 0 -1 -1 0 0 6332672 0 0 0 0 0 0 0 0 0 0 0 0 0 -1371732704 32698 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 I really don't have a clue what's going on in memory to cause this. On a side note, if I replace memset with: for(int i=0;i<rows*columns;i++) *(matrix+i) &= 0x0; then it works perfectly; it also works if I don't open the file. If it helps, I'm running GCC 64-bit version 4.2.4 on Ubuntu. I assume there's some functionality of memset that I'm not properly understanding.
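
    The likely culprit is visible in the constructor, sketched below: sizeof(matrix) measures the pointer, not the allocation. Before the file is opened, the 10x10 block happens to land on fresh, OS-zeroed heap pages, which masks the bug; opening and closing a file allocates and frees internal buffers first, so the Matrix block then reuses dirty heap memory and the junk shows through. The manual &= 0x0 loop and the fix below both touch all rows*columns elements, which is why they work. A minimal sketch of the corrected constructor:

        #include <cstring>

        Matrix::Matrix(int r, int c) : rows(r), columns(c) {
            matrix = new int[rows * columns];
            // sizeof(matrix) is the size of the pointer (8 bytes on a 64-bit
            // build), so the original call zeroed only the first two ints.
            // Zero the whole allocated block instead:
            std::memset(matrix, 0, sizeof(int) * rows * columns);
        }

    The same review applies to the destructor: memory that comes from new[] should be released with delete[] matrix rather than delete matrix.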


  • Policy-based template design: How to access certain policies of the class?

    - by dehmann
    I have a class that uses several policies that are templated. It is called Dish in the following example. I store many of these Dishes in a vector (using a pointer to simple base class), but then I'd like to extract and use them. But I don't know their exact types. Here is the code; it's a bit long, but really simple: #include <iostream> #include <vector> struct DishBase { int id; DishBase(int i) : id(i) {} }; std::ostream& operator<<(std::ostream& out, const DishBase& d) { out << d.id; return out; } // Policy-based class: template<class Appetizer, class Main, class Dessert> class Dish : public DishBase { Appetizer appetizer_; Main main_; Dessert dessert_; public: Dish(int id) : DishBase(id) {} const Appetizer& get_appetizer() { return appetizer_; } const Main& get_main() { return main_; } const Dessert& get_dessert() { return dessert_; } }; struct Storage { typedef DishBase* value_type; typedef std::vector<value_type> Container; typedef Container::const_iterator const_iterator; Container container; Storage() { container.push_back(new Dish<int,double,float>(0)); container.push_back(new Dish<double,int,double>(1)); container.push_back(new Dish<int,int,int>(2)); } ~Storage() { // delete objects } const_iterator begin() { return container.begin(); } const_iterator end() { return container.end(); } }; int main() { Storage s; for(Storage::const_iterator it = s.begin(); it != s.end(); ++it){ std::cout << **it << std::endl; std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? } return 0; } The tricky part is here, in the main() function: std::cout << "Dessert: " << *it->get_dessert() << std::endl; // ?? How can I access the dessert? I don't even know the Dessert type (it is templated), let alone the complete type of the object that I'm getting from the storage. This is just a toy example, but I think my code reduces to this. I'd just like to pass those Dish classes around, and different parts of the code will access different parts of it (in the example: its appetizer, main dish, or dessert).
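
    For what it's worth, the usual answer: once a Dish is stored as a DishBase*, its policy types are erased, so no expression at the loop site can name the Dessert type. The options are a downcast to the exact Dish<A,M,D> (which the caller would have to know), a visitor, or, if callers only need to do something with the dessert rather than recover its type, a virtual hook on the base. A sketch of the last option, assuming each Dessert is streamable (as the int/double/float policies here are):

        #include <iostream>

        struct DishBase {
            int id;
            DishBase(int i) : id(i) {}
            virtual ~DishBase() {}             // also makes Storage's deletes safe
            virtual void print_dessert(std::ostream& out) const = 0;
        };

        template<class Appetizer, class Main, class Dessert>
        class Dish : public DishBase {
            Appetizer appetizer_;
            Main main_;
            Dessert dessert_;
        public:
            Dish(int id) : DishBase(id), appetizer_(), main_(), dessert_() {}
            void print_dessert(std::ostream& out) const { out << dessert_; }
            // get_appetizer()/get_main()/get_dessert() as in the question
        };

    The loop body then becomes (*it)->print_dessert(std::cout); with no knowledge of the policy types at the call site.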


  • Custom Java Swing Meter Control

    - by Tyler
    I'm trying to make a custom swing control that is a meter. The arrow will move up and down. Here is my current code, but I feel I've done it wrong. import java.awt.BasicStroke; import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.LinearGradientPaint; import java.awt.Polygon; import java.awt.Stroke; import java.awt.geom.Point2D; import java.awt.geom.Rectangle2D; import javax.swing.JFrame; import javax.swing.JPanel; public class meter extends JFrame { Stroke drawingStroke = new BasicStroke(2); Rectangle2D rect = new Rectangle2D.Double(105, 50, 40, 200); Double meterPercent = new Double(0.57); public meter() { setTitle("Meter"); setLayout(null); setSize(300, 300); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setLocationRelativeTo(null); setVisible(true); } public void paint(Graphics g) { // Paint Meter Graphics2D g1 = (Graphics2D) g; g1.setStroke(drawingStroke); g1.draw(rect); // Set Meter Colors Point2D start = new Point2D.Float(0, 0); Point2D end = new Point2D.Float(0, this.getHeight()); float[] dist = { 0.1f, 0.5f, 0.9f }; Color[] colors = { Color.green, Color.yellow, Color.red }; LinearGradientPaint p = new LinearGradientPaint(start, end, dist, colors); g1.setPaint(p); g1.fill(rect); // Make a triangle - Arrow on Meter int[] x = new int[3]; int[] y = new int[3]; int n; // count of points // Set Points for Arrow Integer meterArrowHypotenuse = (int) rect.getX(); Integer meterArrowTip = (int) rect.getY() + (int) (rect.getHeight() * (1 - meterPercent)); x[0] = meterArrowHypotenuse - 25; x[1] = meterArrowHypotenuse - 25; x[2] = meterArrowHypotenuse - 5; y[0] = meterArrowTip - 20; // Top Left y[1] = meterArrowTip + 20; // Bottom Left y[2] = meterArrowTip; // Tip of Arrow n = 3; // Number of points, 3 because its a triangle // Draw Arrow Border Polygon myTriShadow = new Polygon(x, y, n); // a triangle g1.setPaint(Color.black); g1.fill(myTriShadow); // Set Points for Arrow Board x[0] = x[0] + 1; x[1] = x[1] + 1; x[2] = x[2] - 2; y[0] = y[0] + 3; y[1] = y[1] - 3; y[2] = y[2]; Robot robot = new Robot(); Color colorMeter = robot.getPixelColor(x[2]+10, y[2]); // Draw Arrow Polygon myTri = new Polygon(x, y, n); // a triangle Color colr = new Color(colorMeter.getRed(), colorMeter.getGreen(), colorMeter.getBlue()); g1.setPaint(colr); g1.fill(myTri); } public static void main(String[] args) { new meter(); } } Thanks for looking.


  • How do I get jqGrid to work using ASP.NET + JSON on the backend?

    - by briandus
    Hi friends, ok, I'm back. I totally simplified my problem to just three simple fields and I'm still stuck on the same line using the addJSONData method. I've been stuck on this for days and no matter how I rework the ajax call, the json string, blah blah blah...I can NOT get this to work! I can't even get it to work as a function when adding one row of data manually. Can anyone PLEASE post a working sample of jqGrid that works with ASP.NET and JSON? Would you please include 2-3 fields (string, integer and date preferably?) I would be happy to see a working sample of jqGrid and just the manual addition of a JSON object using the addJSONData method. Thanks SO MUCH!! If I ever get this working, I will post a full code sample for all the other posting for help from ASP.NET, JSON users stuck on this as well. Again. THANKS!! tbl.addJSONData(objGridData); //err: tbl.addJSONData is not a function!! Here is what Firebug is showing when I receive this message: • objGridData Object total=1 page=1 records=5 rows=[5] ? Page "1" Records "5" Total "1" Rows [Object ID=1 PartnerID=BCN, Object ID=2 PartnerID=BCN, Object ID=3 PartnerID=BCN, 2 more... 0=Object 1=Object 2=Object 3=Object 4=Object] (index) 0 (prop) ID (value) 1 (prop) PartnerID (value) "BCN" (prop) DateTimeInserted (value) Thu May 29 2008 12:08:45 GMT-0700 (Pacific Daylight Time) * There are three more rows Here is the value of the variable tbl (value) 'Table.scroll' <TABLE cellspacing="0" cellpadding="0" border="0" style="width: 245px;" class="scroll grid_htable"><THEAD><TR><TH class="grid_sort grid_resize" style="width: 55px;"><SPAN> </SPAN><DIV id="jqgh_ID" style="cursor: pointer;">ID <IMG src="http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images/sort_desc.gif"/></DIV></TH><TH class="grid_resize" style="width: 90px;"><SPAN> </SPAN><DIV id="jqgh_PartnerID" style="cursor: pointer;">PartnerID </DIV></TH><TH class="grid_resize" style="width: 100px;"><SPAN> </SPAN><DIV id="jqgh_DateTimeInserted" style="cursor: pointer;">DateTimeInserted </DIV></TH></TR></THEAD></TABLE> Here is the complete function: $('table.scroll').jqGrid({ datatype: function(postdata) { mtype: "POST", $.ajax({ url: 'EDI.asmx/GetTestJSONString', type: "POST", contentType: "application/json; charset=utf-8", data: "{}", dataType: "text", //not json . 
let me try to parse success: function(msg, st) { if (st == "success") { var gridData; //strip of "d:" notation var result = JSON.parse(msg); for (var property in result) { gridData = result[property]; break; } var objGridData = eval("(" + gridData + ")"); //creates an object with visible data and structure var tbl = jQuery('table.scroll')[0]; alert(objGridData.rows[0].PartnerID); //displays the correct data //tbl.addJSONData(objGridData); //error received: addJSONData not a function //error received: addJSONData not a function (This uses eval as shown in the documentation) //tbl.addJSONData(eval("(" + objGridData + ")")); //the line below evaluates fine, creating an object and visible data and structure //var objGridData = eval("(" + gridData + ")"); //BUT, the same thing will not work here //tbl.addJSONData(eval("(" + gridData + ")")); //FIREBUG SHOWS THIS AS THE VALUE OF gridData: // "{"total":"1","page":"1","records":"5","rows":[{"ID":1,"PartnerID":"BCN","DateTimeInserted":new Date(1214412777787)},{"ID":2,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125000)},{"ID":3,"PartnerID":"BCN","DateTimeInserted":new Date(1212088125547)},{"ID":4,"PartnerID":"EHG","DateTimeInserted":new Date(1235603192033)},{"ID":5,"PartnerID":"EMDEON","DateTimeInserted":new Date(1235603192000)}]}" } } }); }, jsonReader: { root: "rows", //arry containing actual data page: "page", //current page total: "total", //total pages for the query records: "records", //total number of records repeatitems: false, id: "ID" //index of the column with the PK in it }, colNames: [ 'ID', 'PartnerID', 'DateTimeInserted' ], colModel: [ { name: 'ID', index: 'ID', width: 55 }, { name: 'PartnerID', index: 'PartnerID', width: 90 }, { name: 'DateTimeInserted', index: 'DateTimeInserted', width: 100}], rowNum: 10, rowList: [10, 20, 30], imgpath: 'http://localhost/DNN5/js/jQuery/jqGrid-3.4.3/themes/sand/images', pager: jQuery('#pager'), sortname: 'ID', viewrecords: true, sortorder: "desc", caption: "TEST Example")};


  • OpenXML sdk Modify a sheet in my Excel document

    - by user465202
    Hi! I created an empty template in Excel. I would like to open the template and edit the document, but I do not know how to change the existing sheet. That's the code: using (SpreadsheetDocument xl = SpreadsheetDocument.Open(filename, true)) { WorkbookPart wbp = xl.WorkbookPart; WorkbookPart workbook = xl.WorkbookPart; // Get the worksheet with the required name. // To be used to match the ID for the required sheet data // because the Sheet class and the SheetData class aren't // linked to each other directly. Sheet s = null; if (wbp.Workbook.Sheets.Elements<Sheet>().Count(nm => nm.Name == sheetName) == 0) { // no such sheet with that name xl.Close(); return; } else { s = (Sheet)wbp.Workbook.Sheets.Elements<Sheet>().Where(nm => nm.Name == sheetName).First(); } WorksheetPart wsp = (WorksheetPart)xl.WorkbookPart.GetPartById(s.Id.Value); Worksheet worksheet = new Worksheet(); SheetData sd = new SheetData(); //SheetData sd = (SheetData)wsp.Worksheet.GetFirstChild<SheetData>(); Stylesheet styleSheet = workbook.WorkbookStylesPart.Stylesheet; //SheetData sheetData = new SheetData(); //build the formatted header style UInt32Value headerFontIndex = util.CreateFont( styleSheet, "Arial", 10, true, System.Drawing.Color.Red); //build the formatted date style UInt32Value dateFontIndex = util.CreateFont( styleSheet, "Arial", 8, true, System.Drawing.Color.Black); //set the background color style UInt32Value headerFillIndex = util.CreateFill( styleSheet, System.Drawing.Color.Black); //create the cell style by combining font/background UInt32Value headerStyleIndex = util.CreateCellFormat( styleSheet, headerFontIndex, headerFillIndex, null); /* * Create a set of basic cell styles for specific formats... * If you are controlling your table then you can simply create the styles you need, * this set of code is still intended to be generic.
    */ _numberStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(3)); _doubleStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(4)); _dateStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(14)); _textStyleId = util.CreateCellFormat(styleSheet, headerFontIndex, headerFillIndex, null); _percentageStyleId = util.CreateCellFormat(styleSheet, null, null, UInt32Value.FromUInt32(9)); util.AddNumber(xl, sheetName, (UInt32)3, "E", "27", _numberStyleId); util.AddNumber(xl, sheetName, (UInt32)3, "F", "3.6", _doubleStyleId); util.AddNumber(xl, sheetName, (UInt32)5, "L", "5", _percentageStyleId); util.AddText(xl, sheetName, (UInt32)5, "M", "Dario", _textStyleId); util.AddDate(xl, sheetName, (UInt32)3, "J", DateTime.Now, _dateStyleId); util.AddImage(xl, sheetName, imagePath, "Smile", "Smile", 30, 30); util.MergeCells(xl, sheetName, "D12", "F12"); //util.DeleteValueCell(spreadsheet, sheetName, "F", (UInt32)8); txtCellText.Text = util.GetCellValue(xl, sheetName, (UInt32)5, "M"); double number = util.GetCellDoubleValue(xl, sheetName, (UInt32)3, "E"); double numberD = util.GetCellDoubleValue(xl, sheetName, (UInt32)3, "F"); DateTime datee = util.GetCellDateTimeValue(xl, sheetName, (UInt32)3, "J"); //txtDoubleCell.Text = util.GetCellValue(spreadsheet, sheetName, (UInt32)3, "P"); txtPercentualeCell.Text = util.GetCellValue(xl, sheetName, (UInt32)5, "L"); string date = util.GetCellValue(xl, sheetName, (UInt32)3, "J"); double dateD = Convert.ToDouble(date); DateTime dateTime = DateTime.FromOADate(dateD); txtDateCell.Text = dateTime.ToShortDateString(); //worksheet.Append(sd); /* Columns columns = new Columns(); columns.Append(util.CreateColumnData(10, 10, 40)); worksheet.Append(columns); */ SheetProtection sheetProtection1 = new SheetProtection() { Sheet = true, Objects = true, Scenarios = true, SelectLockedCells = true, SelectUnlockedCells = true }; worksheet.Append(sheetProtection1); wsp.Worksheet = worksheet; wsp.Worksheet.Save(); xl.WorkbookPart.Workbook.Save(); xl.Close(); Thanks!


  • Database call crashes Android Application

    - by Darren Murtagh
    I am using an Android database and it's set up, but when I call it within an onClickListener the app crashes. The code I am using is mButton.setOnClickListener( new View.OnClickListener() { public void onClick(View view) { s = WorkoutChoice.this.weight.getText().toString(); s2 = WorkoutChoice.this.height.getText().toString(); int w = Integer.parseInt(s); double h = Double.parseDouble(s2); double BMI = (w/h)/h; t.setText(""+BMI); long id = db.insertTitle("001", ""+days, ""+BMI); Cursor c = db.getAllTitles(); if (c.moveToFirst()) { do { DisplayTitle(c); } while (c.moveToNext()); } } }); The logcat from when I run it is: 04-01 18:21:54.704: E/global(6333): Deprecated Thread methods are not supported. 04-01 18:21:54.704: E/global(6333): java.lang.UnsupportedOperationException 04-01 18:21:54.704: E/global(6333): at java.lang.VMThread.stop(VMThread.java:85) 04-01 18:21:54.704: E/global(6333): at java.lang.Thread.stop(Thread.java:1391) 04-01 18:21:54.704: E/global(6333): at java.lang.Thread.stop(Thread.java:1356) 04-01 18:21:54.704: E/global(6333): at com.b00348312.workout.Splashscreen$1.run(Splashscreen.java:42) 04-01 18:22:09.444: D/dalvikvm(6333): GC_FOR_MALLOC freed 4221 objects / 252640 bytes in 31ms 04-01 18:22:09.474: I/dalvikvm(6333): Total arena pages for JIT: 11 04-01 18:22:09.574: D/dalvikvm(6333): GC_FOR_MALLOC freed 1304 objects / 302920 bytes in 29ms 04-01 18:22:09.744: D/dalvikvm(6333): GC_FOR_MALLOC freed 2480 objects / 290848 bytes in 33ms 04-01 18:22:10.034: D/dalvikvm(6333): GC_FOR_MALLOC freed 6334 objects / 374152 bytes in 36ms 04-01 18:22:14.344: D/AndroidRuntime(6333): Shutting down VM 04-01 18:22:14.344: W/dalvikvm(6333): threadid=1: thread exiting with uncaught exception (group=0x400259f8) 04-01 18:22:14.364: E/AndroidRuntime(6333): FATAL EXCEPTION: main 04-01 18:22:14.364: E/AndroidRuntime(6333): java.lang.IllegalStateException: database not open 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.database.sqlite.SQLiteDatabase.insertWithOnConflict(SQLiteDatabase.java:1567) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.database.sqlite.SQLiteDatabase.insert(SQLiteDatabase.java:1484) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.b00348312.workout.DataBaseHelper.insertTitle(DataBaseHelper.java:84) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.b00348312.workout.WorkoutChoice$3.onClick(WorkoutChoice.java:84) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.view.View.performClick(View.java:2408) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.view.View$PerformClick.run(View.java:8817) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Handler.handleCallback(Handler.java:587) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Handler.dispatchMessage(Handler.java:92) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.os.Looper.loop(Looper.java:144) 04-01 18:22:14.364: E/AndroidRuntime(6333): at android.app.ActivityThread.main(ActivityThread.java:4937) 04-01 18:22:14.364: E/AndroidRuntime(6333): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 18:22:14.364: E/AndroidRuntime(6333): at java.lang.reflect.Method.invoke(Method.java:521) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:858) 04-01 18:22:14.364: E/AndroidRuntime(6333): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:616) 04-01 18:22:14.364: E/AndroidRuntime(6333): at dalvik.system.NativeStart.main(Native Method) I have noticed errors when the application opens, but I don't know what they are from. When I take out the statements to do with the database there are no errors and everything runs smoothly.


  • Concurrency and Calendar classes

    - by fbielejec
    I have a thread (a class implementing Runnable, called AnalyzeTree) organised around a hash map (ConcurrentMap slicesMap). The class goes through the data (called trees here) in the large text file and parses the geographical coordinates from it into the HashMap. The idea is to process one tree at a time and add or grow the values according to the key (which is just a Double value representing time). The relevant part of the code looks like this: // grow map entry if key exists if (slicesMap.containsKey(sliceTime)) { double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); slicesMap.get(sliceTime).add( new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); // start new entry if no such key in the map } else { List<Coordinates> coords = new ArrayList<Coordinates>(); double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); coords.add(new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); slicesMap.putIfAbsent(sliceTime, coords); // slicesMap.put(sliceTime, coords); }// END: key check And the class is called like this (executor is ExecutorService executor = Executors.newFixedThreadPool(NTHREDS)): mrsd = new SpreadDate(mrsdString); int readTrees = 1; while (treesImporter.hasTree()) { currentTree = (RootedTree) treesImporter.importNextTree(); executor.submit(new AnalyzeTree(currentTree, precisionString, coordinatesName, rateString, numberOfIntervals, treeRootHeight, timescaler, mrsd, slicesMap, useTrueNoise)); // new AnalyzeTree(currentTree, precisionString, // coordinatesName, rateString, numberOfIntervals, // treeRootHeight, timescaler, mrsd, slicesMap, // useTrueNoise).run(); readTrees++; }// END: while has trees Now this runs into trouble when executed in parallel (the commented part running sequentially is fine). I thought it might throw a ConcurrentModificationException, but apparently the problem is in mrsd (an instance of SpreadDate, which is simply a class for date-related calculations).
    The SpreadDate class looks like this: public class SpreadDate { private Calendar cal; private SimpleDateFormat formatter; private Date stringdate; public SpreadDate(String date) throws ParseException { // if no era specified assume current era String line[] = date.split(" "); if (line.length == 1) { StringBuilder properDateStringBuilder = new StringBuilder(); date = properDateStringBuilder.append(date).append(" AD") .toString(); } formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US); stringdate = formatter.parse(date); cal = Calendar.getInstance(); } public long plus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, days); return cal.getTimeInMillis(); }// END: plus public long minus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, -days); //line 39 return cal.getTimeInMillis(); }// END: minus public long getTime() { cal.setTime(stringdate); return cal.getTimeInMillis(); }// END: getDate } And the stack trace from when the exception is thrown: java.lang.ArrayIndexOutOfBoundsException: 58 at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:454) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2098) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2013) at java.util.Calendar.setTimeInMillis(Calendar.java:1126) at java.util.GregorianCalendar.add(GregorianCalendar.java:1020) at utils.SpreadDate.minus(SpreadDate.java:39) at templates.AnalyzeTree.run(AnalyzeTree.java:88) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) If I move the part initializing mrsd to the AnalyzeTree class, it runs without any problems; however, it is not very memory efficient to initialize the class each time the thread runs, hence my concerns. How can it be remedied?
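
    For what it's worth, the trace points at classic shared mutable state: every AnalyzeTree task holds the same SpreadDate, and plus()/minus()/getTime() all mutate the one cal field (setTime, then add), so two threads interleaving inside the Calendar corrupt its internal state, hence the ArrayIndexOutOfBoundsException deep in GregorianCalendar. The language-agnostic fix is to stop mutating shared state: set the fields once and compute into locals (or, less cheaply, lock around every use). A C++ sketch of the immutable shape, with std::chrono standing in for Calendar:

        #include <chrono>

        class SpreadDate {
            // set once in the constructor, never mutated afterwards, so any
            // number of threads may call plus()/minus() concurrently
            const std::chrono::system_clock::time_point base;
        public:
            explicit SpreadDate(std::chrono::system_clock::time_point parsed)
                : base(parsed) {}

            long long plus(int days) const {
                auto t = base + std::chrono::hours(24) * days;  // local, per-call value
                return std::chrono::duration_cast<std::chrono::milliseconds>(
                           t.time_since_epoch()).count();
            }
            long long minus(int days) const { return plus(-days); }
        };

    In Java terms the same shape means parsing the date once into an immutable long in the constructor and doing the day arithmetic on local values, rather than reusing one Calendar instance across threads.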


  • mysql_fetch_array() problem

    - by Marty
    So I have 3 DB tables that are all identical in every way (data is different) except the name of the table. I did this so I could use one piece of code with a switch like so: function disp_bestof($atts) { extract(shortcode_atts(array( 'topic' => '' ), $atts)); $connect = mysql_connect("localhost","foo","bar"); if (!$connect) { die('Could not connect: ' . mysql_error()); } switch ($topic) { case "attorneys": $bestof_query = "SELECT * FROM attorneys p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_TopAttorneys'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; case "physicians": $bestof_query = "SELECT * FROM physicians p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_TopDocs'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; case "dining": $bestof_query = "SELECT * FROM restaurants p JOIN (awards a, categories c, awardLevels l) ON (a.id = p.id AND c.id = a.category AND l.id = a.level) ORDER BY a.category, a.level ASC"; $category_query = "SELECT * FROM categories"; $db = mysql_select_db('roanoke_BestOf_DiningAwards'); $query = mysql_query($bestof_query); $categoryQuery = mysql_query($category_query); break; default: $bestof_query = "switch on $best did not match required case(s)"; break; } $category = ''; while( $result = mysql_fetch_array($query) ) { if( $result['category'] != $category ) { $category = $result['category']; //echo "<div class\"category\">"; $bestof_content .= "<h2>".$category."</h2>\n"; //echo "<ul>"; Now, this whole thing works PERFECT for the first two cases, but the third one "dining" breaks with this error: Warning: mysql_fetch_assoc(): supplied argument is not a valid MySQL result resource ... on line 78 Line 78 is the while() at the bottom. I have checked and double checked and can't figure what the problem is. Here's the DB structure for 'restaurants': CREATE TABLE `restaurants` ( `id` int(10) NOT NULL auto_increment, `restaurant` varchar(255) default NULL, `address1` varchar(255) default NULL, `address2` varchar(255) default NULL, `city` varchar(255) default NULL, `state` varchar(255) default NULL, `zip` double default NULL, `phone` double default NULL, `URI` varchar(255) default NULL, `neighborhood` varchar(255) default NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM AUTO_INCREMENT=249 DEFAULT CHARSET=utf8 Does anyone see what I'm doing wrong here? I'm passing "dining" to the function and as I said before, the first two cases in the switch work fine. I'm sure it's something stupid...


  • Should I go along with my choice of web hosting company or still search?

    - by Devner
    Hi all, I have been searching for a good website hosting company that can offer me all the services that I need for hosting my PHP & MySQL based website. Now this is a community based website and users will be able to upload pictures, etc. The hosting company that I have in mind, currently lets me do everything... let me use mail(), supports CRON jobs, etc. Of course they are charging about $6/month. Now the only problem with this company is that they have a limit of 50,000 files that can exist within the hosting account at any time. This kind of contradicts their frontpage ad of "UNLIMITED SPACE" on their website. Apart from this, I know of no other reason why I should not go with this hosting company. But my issue is that 50,000 file limit is what I cannot live with, once the users increase in significant number and the files they upload, exceed 50,000 in number. Now since this is a dynamic website and also includes sensitive issues like payments, etc. I am not sure if I should go ahead with this company as I am just starting out and then later switch over to a better hosting company which does not limit me with 50,000 files. If I need to switch over once I host with this company, I will need to take backups of all the files located in my account (jpg, zip, etc.), then upload them to the new host. I am not aware of any tools that can help me in this process. Can you please mention if you know any? I can go ahead with the other companies right now, but their cost is double/triple of the current price and they all sport less features than my current choice. If I pay more, then they are ready to accommodate my higher demands. Unfortunately, the company that I am willing to go with now, does NOT have any other higher/better plans that I can switch to. So that's the really really bad part. So my question(s): Since I am starting out with my website and since the scope of users initially is going to be less/small, should I go ahead with the current choice and then once the demand increases, switch over to a better provider? If yes, how can I transfer my database, especially the jpg files, etc. to the new provider? I don't even know the tools required to backup and restore to another host. (I don't like this idea but still..) Should I go ahead and pay more right now and go with better providers (without knowing if the website is going to do really that well) just for saving myself the trouble of having to take a backup of the 50,000 files and upload to a new host from an old host and just start paying double/triple the price without even knowing if I would receive back the returns as I expected? Backup and Restore in such a bulky numbers is something that I have never done before and hence I am stuck here trying to decide what to do. The price per month is also a considerable factor in my decision. All these web hosting companies say one common thing: It is customers responsibility to backup and restore data and they are not liable for any loss. So no matter what hosting company that I would like to go with, they ask me to take backup via FTP so that I can restore them whenever I want (& it seems to be safer to have the files locally with me). Some are providing tools for backup and some are not and I am not sure how much their backup tools can be trusted considering the disclaimers they have. 
I have never backed-up and restored 50,000 files from one web host to another, so please, all you experienced people out there, leave your comments and let me know your suggestions so that I can decide. I have spent 2 days fighting with myself trying to decide what to do and finally concluded that this is a double-edged sword and I can't arrive at a satisfactory final decision without involving others suggestions. I believe that someone must be out there who may have had such troublesome decision to make. So all your suggestions to help me make my decision are appreciated. Thank you all.


  • Java algorithm for normalizing audio

    - by Marty Pitt
    I'm trying to normalize an audio file of speech. Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter. I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak. I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here) @Override public void onAudioSamples(IAudioSamplesEvent event) { // get the raw audio bytes and adjust its value ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer(); for (int i = 0; i < buffer.limit(); ++i) buffer.put(i, (short)(buffer.get(i) * mVolume)); super.onAudioSamples(event); } Here, they modify the bytes in getAudioSamples() by a constant of mVolume. Building on this approach, I've attempted a normalisation that modifies the bytes in getAudioSamples() to a normalised value, considering the max/min in the file. (See below for details.) I have a simple filter to leave "silence" alone (i.e., anything below a value). I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm, or the way I manipulate the bytes. However, I'm unsure of where to go next. Here's an abridged version of what I'm currently doing. Step 1: Find peaks in file: Reads the full audio file, and finds the highest and lowest values of buffer.get() for all AudioSamples. @Override public void onAudioSamples(IAudioSamplesEvent event) { IAudioSamples audioSamples = event.getAudioSamples(); ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer(); short min = Short.MAX_VALUE; short max = Short.MIN_VALUE; for (int i = 0; i < buffer.limit(); ++i) { short value = buffer.get(i); min = (short) Math.min(min, value); max = (short) Math.max(max, value); } // assignment of min/max omitted for brevity. super.onAudioSamples(event); } Step 2: Normalize all values: In a loop similar to step 1, replace the buffer with normalized values, calling: buffer.put(i, normalize(buffer.get(i))); public short normalize(short value) { if (isBackgroundNoise(value)) return value; short rawMin = // min from step1 short rawMax = // max from step1 short targetRangeMin = 1000; short targetRangeMax = 8000; int abs = Math.abs(value); double a = (abs - rawMin) * (targetRangeMax - targetRangeMin); double b = (rawMax - rawMin); double result = targetRangeMin + ( a/b ); // Copy the sign of value to result. result = Math.copySign(result,value); return (short) result; } Questions: Is this a valid approach for attempting to normalize an audio file? Is my math in normalize() valid? Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
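
    As a reference point, plain peak normalization applies one constant gain to the whole file, so it cannot add noise by itself: relative sample values, and hence the waveform shape, are preserved. The per-sample remap above is nonlinear (abs plus copySign around a silence gate creates jumps near the gate threshold), which is audible as distortion; and levelling peaks against quiet sections is really dynamic range compression, which needs a gain that varies smoothly over windows of samples rather than per sample. A minimal C++ sketch of the constant-gain version, assuming signed 16-bit samples:

        #include <algorithm>
        #include <cmath>
        #include <cstdint>
        #include <cstdlib>
        #include <vector>

        // One gain for the whole file: quiet and loud parts keep their relationship.
        void normalizePeak(std::vector<int16_t>& samples, int target = 30000) {
            int peak = 0;
            for (int16_t s : samples)
                peak = std::max(peak, std::abs(static_cast<int>(s)));
            if (peak == 0) return;                         // nothing but silence
            const double gain = static_cast<double>(target) / peak;
            for (int16_t& s : samples)
                s = static_cast<int16_t>(std::lround(s * gain));
        }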


  • C++ templated factory constructor/de-serialization

    - by KRao
    Hi, I was looking at the boost serialization library, and the intrusive way to provide support for serialization is to define a member function with this signature (simplifying): class ToBeSerialized { public: //Define this to support serialization //Notice not virtual function! template<class Archive> void serialize(Archive & ar) {.....} }; Moreover, one way to support serialization of derived classes through base pointers is to use a macro of the type: //No mention of the base class(es) from which Derived_class inherits BOOST_CLASS_EXPORT_GUID(Derived_class, "derived_class") where Derived_class is some class which inherits from a base class, say Base_class. Thanks to this macro, it is possible to serialize classes of type Derived_class through pointers to Base_class correctly. The question is: in C++ I am used to writing abstract factories implemented through a map from std::string to (pointers to) functions which return objects of the desired type (and everything is fine thanks to covariant return types). However I fail to see how I could use the above non-virtual serialize template member function to properly de-serialize (i.e. construct) an object without knowing its type (but assuming that the type information has been stored by the serializer, say in a string). What I would like to do (keeping the same nomenclature as above) is something like the following: XmlArchive xmlArchive; //A type of archive xmlArchive.open("C:/ser.txt"); //Contains type information for the serialized class Base_class* basePtr = Factory<Base_class>::create("derived_class",xmlArchive); with the function on the right-hand side creating an object on the heap of type Derived_class (via the default constructor; this is the part I know how to solve) and calling the serialize function of xmlArchive (here I am stuck!), i.e. do something like: Base_class* Factory<Base_class>::create("derived_class",xmlArchive) { Base_class* basePtr = new Base_class; //OK, doable, usual map string to pointer to function static_cast<Derived_class*>( basePtr )->serialize( xmlArchive ); //De-serialization, how????? return basePtr; } I am sure this can be done (boost serialize does it, but its code is impenetrable! :P), yet I fail to figure out how. The key problem is that the serialize function is a template function, so I cannot have a pointer to a generic templated function. As the point of writing the templated serialize function is to make the code generic (i.e. not having to re-write the serialize function for different Archivers), it does not make sense to then have to register all the derived classes for all possible archive types, like: MY_CLASS_REGISTER(Derived_class, XmlArchive); MY_CLASS_REGISTER(Derived_class, TxtArchive); ... In fact in my code I rely on overloading to get the correct behaviour: void serialize( XmlArchive& archive, Derived_class& derived ); void serialize( TxtArchive& archive, Derived_class& derived ); ... The key point to keep in mind is that the archive type is always known, i.e. I am never using runtime polymorphism for the archive class... (again, I am using overloading on the archive type). Any suggestion to help me out? Thank you very much in advance! Cheers
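
    One standard shape for this, sketched below as an illustration rather than as boost's actual mechanism: a pointer to a function template does not exist, but instantiating a function template once per concrete type at registration time yields an ordinary callable that a map can store. Since the archive type is known at compile time (as stressed above), the registry can simply be parameterized on it. Names follow the question, not boost:

        #include <functional>
        #include <map>
        #include <string>

        template<class Base, class Archive>
        struct Factory {
            using Creator = std::function<Base*(Archive&)>;

            static std::map<std::string, Creator>& registry() {
                static std::map<std::string, Creator> r;  // one registry per archive type
                return r;
            }

            // Instantiating this per Derived "freezes" the serialize<Archive>
            // template into an ordinary function the map can hold.
            template<class Derived>
            static void register_type(const std::string& name) {
                registry()[name] = [](Archive& ar) -> Base* {
                    Derived* d = new Derived;   // default-construct, as in the question
                    d->serialize(ar);           // static type known here, so the
                    return d;                   // non-virtual template member resolves
                };
            }

            static Base* create(const std::string& name, Archive& ar) {
                return registry().at(name)(ar);
            }
        };

    Registration then reads Factory<Base_class, XmlArchive>::register_type<Derived_class>("derived_class"); a macro expanding to one such line per supported archive type is one way to mimic what BOOST_CLASS_EXPORT automates.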


  • Difficulty creating classes and arrays of those classes C#

    - by Lucifer Fayte
    I'm trying to implement a Discrete Fourier Transform algorithm for a project I'm doing in school, but creating a class is proving difficult (which it shouldn't be). I'm using Visual Studio 2012. Basically I need a class called Complex to store the two values I get from a DFT: the real portion and the imaginary portion. This is what I have so far: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace SoundEditor_V3 { public class Complex { public double real; public double im; public Complex() { real = 0; im = 0; } } } The problem is that it doesn't recognize the constructor as a constructor. Now, I'm just learning C#, but I looked it up online and this is how it's supposed to look, apparently. It recognizes my constructor as a method. Why is that? Am I creating the class wrong? It's doing the same thing for my Fourier class as well, so each time I try to create a Fourier object and then use its methods... there is no such thing. For example, I do this: Fourier fou = new Fourier(); fou.DFT(s, N, amp, 0); and it tells me fou is a 'field' but is used like a 'type'. Why is it saying that? Here is the code for my Fourier class as well: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace SoundEditor_V3 { public class Fourier { //FOURIER //N = number of samples //s is the array of samples(data) //amp is the array where the complex result will be written to //start is the where in the array to start public void DFT(byte[] s, int N, ref Complex[] amp, int start) { Complex tem = new Complex(); int f; int t; for (f = 0; f < N; f++) { tem.real = 0; tem.im = 0; for (t = 0; t < N; t++) { tem.real += s[t + start] * Math.Cos(2 * Math.PI * t * f / N); tem.im -= s[t + start] * Math.Sin(2 * Math.PI * t * f / N); } amp[f].real = tem.real; amp[f].im = tem.im; } } //INVERSE FOURIER public void IDFT(Complex[] A, ref int[] s) { int N = A.Length; int t, f; double result; for (t = 0; t < N; t++) { result = 0; for (f = 0; f < N; f++) { result += A[f].real * Math.Cos(2 * Math.PI * t * f / N) - A[f].im * Math.Sin(2 * Math.PI * t * f / N); } s[t] = (int)Math.Round(result); } } } } I'm very much stuck at the moment; any and all help would be appreciated. Thank you.


  • Java curious Loop Performance

    - by user1680583
    I have a big problem while evaluating my Java code. To simplify the problem I wrote the following code, which produces the same curious behavior. The important parts are the method run() and the given double value rate. For my runtime test (in the main method) I set the rate to 0.5 one time and 1.0 the other time. With the value 1.0 the if-statement will be executed in each loop iteration, and with the value 0.5 the if-statement will be executed half as often. For this reason I expected a longer runtime in the first case, but the opposite is true. Can anybody explain this phenomenon to me? The result of main: Test mit rate = 0.5 Length: 50000000, IF executions: 25000856 Execution time was 4329 ms. Length: 50000000, IF executions: 24999141 Execution time was 4307 ms. Length: 50000000, IF executions: 25001582 Execution time was 4223 ms. Length: 50000000, IF executions: 25000694 Execution time was 4328 ms. Length: 50000000, IF executions: 25004766 Execution time was 4346 ms. ================================= Test mit rate = 1.0 Length: 50000000, IF executions: 50000000 Execution time was 3482 ms. Length: 50000000, IF executions: 50000000 Execution time was 3572 ms. Length: 50000000, IF executions: 50000000 Execution time was 3529 ms. Length: 50000000, IF executions: 50000000 Execution time was 3479 ms. Length: 50000000, IF executions: 50000000 Execution time was 3473 ms. The code: public ArrayList<Byte> list = new ArrayList<Byte>(); public final int LENGTH = 50000000; public PerformanceTest(){ byte[]arr = new byte[LENGTH]; Random random = new Random(); random.nextBytes(arr); for(byte b : arr) list.add(b); } public void run(double rate){ byte b = 0; int count = 0; for (int i = 0; i < LENGTH; i++) { if(getRate(rate)){ list.set(i, b); count++; } } System.out.println("Length: " + LENGTH + ", IF executions: " + count); } public boolean getRate(double rate){ return Math.random() < rate; } public static void main(String[] args) throws InterruptedException { PerformanceTest test = new PerformanceTest(); long start, end; System.out.println("Test mit rate = 0.5"); for (int i = 0; i < 5; i++) { start=System.currentTimeMillis(); test.run(0.5); end = System.currentTimeMillis(); System.out.println("Execution time was "+(end-start)+" ms."); Thread.sleep(500); } System.out.println("================================="); System.out.println("Test mit rate = 1.0"); for (int i = 0; i < 5; i++) { start=System.currentTimeMillis(); test.run(1.0); end = System.currentTimeMillis(); System.out.println("Execution time was "+(end-start)+" ms."); Thread.sleep(500); } }
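
    The standard explanation is branch prediction, not the amount of work: Math.random() < 1.0 is true on every iteration, so the CPU predicts the branch perfectly, while at 0.5 it is a coin flip that mispredicts roughly half the time, and each misprediction flushes the pipeline, which costs more than the skipped list.set calls save. The effect is language-independent; a C++ sketch of the same shape that usually reproduces it (though an optimizer may turn the branch into a conditional move and hide the effect, so results vary by compiler and flags):

        #include <chrono>
        #include <cstdio>
        #include <random>
        #include <vector>

        int main() {
            const int N = 10000000;
            std::mt19937 gen(42);
            std::vector<int> xs(N);
            for (int& x : xs) x = gen() % 100;   // precompute the "coin flips"
            std::vector<char> out(N, 1);

            for (int rate : {50, 100}) {         // 50% taken vs. always taken
                long long count = 0;
                auto t0 = std::chrono::steady_clock::now();
                for (int i = 0; i < N; i++) {
                    if (xs[i] < rate) {          // unpredictable at 50,
                        out[i] = 0;              // perfectly predicted at 100
                        ++count;
                    }
                }
                auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                              std::chrono::steady_clock::now() - t0).count();
                std::printf("rate %d%%: %lld hits, %lld ms\n",
                            rate, count, (long long)ms);
            }
        }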


  • Problem with incomplete type while trying to detect existence of a member function

    - by abir
    I was trying to detect the existence of a member function for a class, where the function tries to use an incomplete type. The typedef is struct foo; typedef std::allocator<foo> foo_alloc; The detection code is struct has_alloc { template<typename U,U x> struct dummy; template<typename U> static char check(dummy<void* (U::*)(std::size_t),&U::allocate>*); template<typename U> static char (&check(...))[2]; const static bool value = (sizeof(check<foo_alloc>(0)) == 1); }; So far I was using the incomplete type foo with std::allocator without any error on VS2008. However, when I replaced it with a nearly identical implementation such as template<typename T> struct allocator { T* allocate(std::size_t n) { return (T*)operator new (sizeof(T)*n); } }; it gives an error saying that, as T is an incomplete type, it has a problem instantiating allocator<foo>, because allocate uses sizeof. GCC 4.5 with std::allocator also gives the error, so it seems that during the detection process the class needs to be completely instantiated, even when I am not using that function at all. What I was looking for is void* allocate(std::size_t), which is different from T* allocate(std::size_t). My questions are (I have three questions, but as they are correlated, I thought it better not to create three separate questions): Why doesn't the MS std::allocator check for the incomplete type foo while instantiating? Are they following any trick which can be implemented? Why does the compiler need to instantiate allocator<T> to check the existence of the function, when sizeof is not used as an SFINAE mechanism to remove/add allocate to the overload resolution set? It should be noted that if I remove the generic implementation of allocate, leaving the declaration only, and specialize it for foo afterwards, such as struct foo{}; template<> struct allocator<foo> { foo* allocate(std::size_t n) { return (foo*)operator new (sizeof(foo)*n); } }; after struct has_alloc, it compiles in GCC 4.5 while giving an error in VS2008, as allocator<T> is already instantiated and an explicit specialization for allocator<foo> is already defined. Is it legal to use nested types for an std::allocator of incomplete type, such as typedef foo_alloc::pointer foo_pointer;? Though it is practically working for me, I suspect the nested types such as pointer may depend on the completeness of the type it takes. It would be good to know if there is any possible way to typedef such types as foo_pointer, where the type pointer depends on the completeness of foo. NOTE: As the code is not copy-pasted from an editor, it may have some syntax errors. I will correct them if I find any. Also the code (such as allocator) is not a complete implementation; I simplified and typed only the portion which I think is useful for this particular problem.


  • Critique my heap debugger

    - by FredOverflow
    I wrote the following heap debugger in order to demonstrate memory leaks, double deletes and wrong forms of delete (i.e. trying to delete an array with delete p instead of delete[] p) to beginning programmers. I would love to get some feedback on it from strong C++ programmers, because I have never done this before and I'm sure I've made some stupid mistakes. Thanks! #include <cstdlib> #include <iostream> #include <new> namespace { const int ALIGNMENT = 16; const char* const ERR = "*** ERROR: "; int counter = 0; struct heap_debugger { heap_debugger() { std::cerr << "*** heap debugger started\n"; } ~heap_debugger() { std::cerr << "*** heap debugger shutting down\n"; if (counter > 0) { std::cerr << ERR << "failed to release memory " << counter << " times\n"; } else if (counter < 0) { std::cerr << ERR << (-counter) << " double deletes detected\n"; } } } instance; void* allocate(size_t size, const char* kind_of_memory, size_t token) throw (std::bad_alloc) { void* raw = malloc(size + ALIGNMENT); if (raw == 0) throw std::bad_alloc(); *static_cast<size_t*>(raw) = token; void* payload = static_cast<char*>(raw) + ALIGNMENT; ++counter; std::cerr << "*** allocated " << kind_of_memory << " at " << payload << " (" << size << " bytes)\n"; return payload; } void release(void* payload, const char* kind_of_memory, size_t correct_token, size_t wrong_token) throw () { if (payload == 0) return; std::cerr << "*** releasing " << kind_of_memory << " at " << payload << '\n'; --counter; void* raw = static_cast<char*>(payload) - ALIGNMENT; size_t* token = static_cast<size_t*>(raw); if (*token == correct_token) { *token = 0xDEADBEEF; free(raw); } else if (*token == wrong_token) { *token = 0x177E6A7; std::cerr << ERR << "wrong form of delete\n"; } else { std::cerr << ERR << "double delete\n"; } } } void* operator new(size_t size) throw (std::bad_alloc) { return allocate(size, "non-array memory", 0x5AFE6A8D); } void* operator new[](size_t size) throw (std::bad_alloc) { return allocate(size, " array memory", 0x5AFE6A8E); } void operator delete(void* payload) throw () { release(payload, "non-array memory", 0x5AFE6A8D, 0x5AFE6A8E); } void operator delete[](void* payload) throw () { release(payload, " array memory", 0x5AFE6A8E, 0x5AFE6A8D); }
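
    Feedback is easiest to give against a driver; a minimal one, assuming the operators above are linked into the same program, with the expected diagnostics in comments:

        int main() {
            int* a = new int(1);   // logged: "allocated non-array memory"
            delete a;              // token matches: freed cleanly

            int* b = new int[4];   // logged: "allocated array memory"
            delete b;              // reported: "wrong form of delete"
                                   // (array token seen by scalar delete)

            int* c = new int(2);
            delete c;
            delete c;              // reported: "double delete"
        }                          // ~heap_debugger reports any imbalance

    Two things such a driver exposes: the double-delete check works by re-reading the token from memory that free() has already reclaimed, which is itself undefined behaviour; and a wrong-form delete decrements counter even though the block is never released, so the shutdown report understates leaks in that case.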


  • C++ pimpl idiom wastes an instruction vs. C style?

    - by Rob
    (Yes, I know that one machine instruction usually doesn't matter. I'm asking this question because I want to understand the pimpl idiom, and use it in the best possible way; and because sometimes I do care about one machine instruction.) In the sample code below, there are two classes, Thing and OtherThing. Users would include "thing.hh". Thing uses the pimpl idiom to hide its implementation. OtherThing uses a C style – non-member functions that return and take pointers. This style produces slightly better machine code. I'm wondering: is there a way to use the C++ style – i.e., make the functions into member functions – and yet still save the machine instruction? I like this style because it doesn't pollute the namespace outside the class. Note: I'm only looking at calling member functions (in this case, calc). I'm not looking at object allocation. Below are the files, commands, and the machine code, on my Mac. thing.hh: class ThingImpl; class Thing { ThingImpl *impl; public: Thing(); int calc(); }; class OtherThing; OtherThing *make_other(); int calc(OtherThing *); thing.cc: #include "thing.hh" struct ThingImpl { int x; }; Thing::Thing() { impl = new ThingImpl; impl->x = 5; } int Thing::calc() { return impl->x + 1; } struct OtherThing { int x; }; OtherThing *make_other() { OtherThing *t = new OtherThing; t->x = 5; return t; } int calc(OtherThing *t) { return t->x + 1; } main.cc (just to test the code actually works...): #include "thing.hh" #include <cstdio> int main() { Thing *t = new Thing; printf("calc: %d\n", t->calc()); OtherThing *t2 = make_other(); printf("calc: %d\n", calc(t2)); } Makefile: all: main thing.o : thing.cc thing.hh g++ -fomit-frame-pointer -O2 -c thing.cc main.o : main.cc thing.hh g++ -fomit-frame-pointer -O2 -c main.cc main: main.o thing.o g++ -O2 -o $@ $^ clean: rm *.o rm main Run make and then look at the machine code. On the Mac I use otool -tv thing.o | c++filt. On Linux I think it's objdump -d thing.o. Here is the relevant output: Thing::calc(): 0000000000000000 movq (%rdi),%rax 0000000000000003 movl (%rax),%eax 0000000000000005 incl %eax 0000000000000007 ret calc(OtherThing*): 0000000000000010 movl (%rdi),%eax 0000000000000012 incl %eax 0000000000000014 ret Notice the extra instruction because of the pointer indirection. The first function looks up two fields (impl, then x), while the second only needs to get x. What can be done?
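
    One known variation that keeps member-function syntax and loses the load is the so-called fast pimpl: store the implementation inside the object as opaque bytes, so the data sits at a fixed offset from this and calc() compiles to a single load like the C version. The cost is that the header must hard-code the implementation's size and alignment. A sketch in C++11 spelling (strictly, newer standards also want std::launder around the cast):

        // thing.hh -- clients still never see ThingImpl
        class Thing {
            alignas(8) unsigned char storage[8];  // must cover ThingImpl
        public:
            Thing();
            int calc();
        };

        // thing.cc
        #include <new>

        struct ThingImpl { int x; };
        static_assert(sizeof(ThingImpl) <= 8, "enlarge Thing::storage");

        Thing::Thing() { new (storage) ThingImpl{5}; }  // construct in place
        // ThingImpl is trivially destructible, so no explicit ~Thing() is needed

        int Thing::calc() {
            // data lives at a fixed offset inside *this: one load, as in the C style
            return reinterpret_cast<ThingImpl*>(storage)->x + 1;
        }

    The trade-off is deliberate: the compilation firewall gets weaker (a size change in ThingImpl forces clients to recompile via the static_assert), in exchange for the removed indirection.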


  • Rewrite C++ code into Objective C

    - by Phil_M
    Hello, I have some C++ source code that I would like to rewrite into Objective-C. It would help me a lot if someone could write me a header file for this code. Once I have the header file I will be able to rewrite the rest of the source code. It would be very nice if someone could help me, please. Thanks. I will post the source code here: #include <stdlib.h> #include <iostream.h> #define STATES 5 int transitionTable[STATES][STATES]; // function declarations: double randfloat (void); int chooseNextEventFromTable (int current, int table[STATES][STATES]); int chooseNextEventFromTransitionTablee (int currentState); void setTable (int value, int table[STATES][STATES]); ////////////////////////////////////////////////////////////////////////// int main(void) { int i; // for demo purposes: transitionTable[0][0] = 0; transitionTable[0][1] = 20; transitionTable[0][2] = 30; transitionTable[0][3] = 50; transitionTable[0][4] = 0; transitionTable[1][0] = 35; transitionTable[1][1] = 25; transitionTable[1][2] = 20; transitionTable[1][3] = 30; transitionTable[1][4] = 0; transitionTable[2][0] = 70; transitionTable[2][1] = 0; transitionTable[2][2] = 15; transitionTable[2][3] = 0; transitionTable[2][4] = 15; transitionTable[3][0] = 0; transitionTable[3][1] = 25; transitionTable[3][2] = 25; transitionTable[3][3] = 0; transitionTable[3][4] = 50; transitionTable[4][0] = 13; transitionTable[4][1] = 17; transitionTable[4][2] = 22; transitionTable[4][3] = 48; transitionTable[4][4] = 0; int currentState = 0; for (i=0; i<10; i++) { std::cout << currentState << " "; currentState = chooseNextEventFromTransitionTablee(currentState); } return 0; }; ////////////////////////////////////////////////////////////////////////// ////////////////////////////// // // chooseNextEventFromTransitionTable -- choose the next note. // int chooseNextEventFromTransitionTablee(int currentState) { int targetSum = 0; int sum = 0; int targetNote = 0; int totalevents = 0; int i; currentState = currentState % STATES; // remove any octave value for (i=0; i<STATES; i++) { totalevents += transitionTable[currentState][i]; } targetSum = (int)(randfloat() * totalevents + 0.5); while (targetNote < STATES && sum+transitionTable[currentState][targetNote] < targetSum) { sum += transitionTable[currentState][targetNote]; targetNote++; } return targetNote; } ////////////////////////////// // // randfloat -- returns a random number between 0.0 and 1.0. // double randfloat(void) { return (double)rand()/RAND_MAX; } ////////////////////////////// // // setTable -- set all values in the transition table to the given value. // void setTable(int value, int table[STATES][STATES]) { int i, j; for (i=0; i<STATES; i++) { for (j=0; j<STATES; j++) { table[i][j] = value; } } }
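
    Since the request is specifically a header, one possible header derived directly from the code above (the file name is hypothetical, and note that chooseNextEventFromTable is declared in the original but never defined, so it stays a bare declaration):

        // markov.h -- hypothetical name for the header of the code above
        #ifndef MARKOV_H
        #define MARKOV_H

        #define STATES 5

        extern int transitionTable[STATES][STATES];  // defined in the .cpp file

        double randfloat(void);
        int chooseNextEventFromTable(int current, int table[STATES][STATES]);
        int chooseNextEventFromTransitionTablee(int currentState);
        void setTable(int value, int table[STATES][STATES]);

        #endif

    The .cpp then keeps int transitionTable[STATES][STATES]; as the single definition and replaces its declaration block with #include "markov.h". From there the Objective-C rewrite can keep these as plain C functions, since Objective-C is a strict superset of C.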

    Read the article

  • 10000's+ UI elements, bind or draw?

    - by jpiccolo
I am drawing a header for a timeline control. I go down to 0.01 seconds (10 ms) per line, so for a 10-minute timeline I am looking at drawing 60,000 lines plus 6,000 labels. This takes a while, roughly 10 seconds, and I would like to offload it from the UI thread. My code is currently:

private void drawHeader() {
    Header.Children.Clear();
    switch (viewLevel) {
        case ViewLevel.MilliSeconds100:
            double hWidth = Header.Width;
            this.drawHeaderLines(new TimeSpan(0, 0, 0, 0, 10), 100, 5, hWidth);
            // Was looking into BackgroundWorker to offload the UI:
            //backgroundWorker = new BackgroundWorker();
            //backgroundWorker.DoWork += delegate(object sender, DoWorkEventArgs args)
            //{
            //    this.drawHeaderLines(new TimeSpan(0, 0, 0, 0, 10), 100, 5, hWidth);
            //};
            //backgroundWorker.RunWorkerAsync();
            break;
    }
}

private void drawHeaderLines(TimeSpan timeStep, int majorEveryXLine, int distanceBetweenLines, double headerWidth) {
    var currentTime = new TimeSpan(0, 0, 0, 0, 0);
    const int everyXLine100 = 10;
    double currentX = 0;
    var currentLine = 0;

    while (currentX < headerWidth) {
        var l = new Line {
            ToolTip = currentTime.ToString(@"hh\:mm\:ss\.fff"),
            StrokeThickness = 1, X1 = 0, X2 = 0, Y1 = 30, Y2 = 25
        };

        if (((currentLine % majorEveryXLine) == 0) && currentLine != 0) {
            l.StrokeThickness = 2;
            l.Y2 = 15;
            var textBlock = new TextBlock {
                Text = l.ToolTip.ToString(),
                FontSize = 8,
                FontFamily = new FontFamily("Tahoma"),
                Foreground = new SolidColorBrush(Color.FromRgb(255, 255, 255))
            };
            Canvas.SetLeft(textBlock, (currentX - 22));
            Canvas.SetTop(textBlock, 0);
            Header.Children.Add(textBlock);
        }

        if ((((currentLine % everyXLine100) == 0) && currentLine != 0) && (currentLine % majorEveryXLine) != 0) {
            l.Y2 = 20;
            var textBlock = new TextBlock {
                Text = string.Format(".{0}", TimeSpan.Parse(l.ToolTip.ToString()).Milliseconds),
                FontSize = 8,
                FontFamily = new FontFamily("Tahoma"),
                Foreground = new SolidColorBrush(Color.FromRgb(192, 192, 192))
            };
            Canvas.SetLeft(textBlock, (currentX - 8));
            Canvas.SetTop(textBlock, 8);
            Header.Children.Add(textBlock);
        }

        l.Stroke = new SolidColorBrush(Color.FromRgb(255, 255, 255));
        Header.Children.Add(l);
        Canvas.SetLeft(l, currentX);

        currentX += distanceBetweenLines;
        currentLine++;
        currentTime += timeStep;
    }
}

I had looked into BackgroundWorker, but you can't create UI elements on a non-UI thread. Is it possible at all to do drawHeaderLines on a non-UI thread? Could I use data binding for drawing the lines, and would that help with UI responsiveness? I imagine I can use data binding, but the styling is probably beyond my current WPF ability (coming from WinForms and trying to learn what all these style objects are and how to bind them). Would anyone be able to supply a starting point for templating this out, or Google a tutorial that would get me started?
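One direction often suggested for this situation (a hedged sketch, not the asker's code): WPF elements such as Line and TextBlock cannot be created off the UI thread, but the position and label math is plain data, so compute it on a background thread and only materialize elements back on the UI thread. TickInfo and BuildTick below are invented names, the label ticks are omitted for brevity, and the same Header canvas from the question is assumed.

private struct TickInfo
{
    public double X;
    public string Label;     // null for minor ticks
    public bool IsMajor;
}

private void drawHeaderAsync(TimeSpan timeStep, int majorEveryXLine,
                             int distanceBetweenLines, double headerWidth)
{
    var worker = new BackgroundWorker();
    worker.DoWork += (s, e) =>
    {
        // Pure computation: safe off the UI thread.
        var ticks = new List<TickInfo>();
        var time = TimeSpan.Zero;
        double x = 0;
        int line = 0;
        while (x < headerWidth)
        {
            bool major = (line % majorEveryXLine) == 0 && line != 0;
            ticks.Add(new TickInfo
            {
                X = x,
                IsMajor = major,
                Label = major ? time.ToString(@"hh\:mm\:ss\.fff") : null
            });
            x += distanceBetweenLines;
            line++;
            time += timeStep;
        }
        e.Result = ticks;
    };
    worker.RunWorkerCompleted += (s, e) =>
    {
        // Back on the UI thread: materialize the elements. For very large
        // counts this could be batched further via Dispatcher.BeginInvoke
        // at Background priority to keep the UI responsive.
        foreach (var t in (List<TickInfo>)e.Result)
            Header.Children.Add(BuildTick(t));
    };
    worker.RunWorkerAsync();
}

private UIElement BuildTick(TickInfo t)
{
    var l = new Line
    {
        X1 = 0, X2 = 0, Y1 = 30,
        Y2 = t.IsMajor ? 15 : 25,
        StrokeThickness = t.IsMajor ? 2 : 1,
        Stroke = Brushes.White
    };
    Canvas.SetLeft(l, t.X);
    return l;
}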

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box – somewhat unexpectedly – is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint via AJAX:

<form>
Name: <input type="text" name="name" value="Rick" />
Value: <input type="text" name="value" value="12" />
Entered: <input type="text" name="entered" value="12/01/2011" />
<input type="button" id="btnSend" value="Send" />
</form>

<script type="text/javascript">
$("#btnSend").click(function() {
    $.post("samples/PostMultipleSimpleValues?action=kazam",
           $("form").serialize(),
           function (result) { alert(result); });
});
</script>

Or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:

$.post("samples/PostMultipleSimpleValues?action=kazam",
       { name: "Rick", value: 1, entered: "12/01/2012" },
       function (result) { alert(result); });

On the wire this generates a simple POST request with URL-encoded values in the content:

POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1
Host: localhost
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1
Accept: application/json
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Referer: http://localhost/AspNetWebApi/FormPostTest.html
Content-Length: 41
Pragma: no-cache
Cache-Control: no-cache

name=Rick&value=12&entered=12%2F10%2F2011

Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately, Web API can't handle this request out of the box. If I create a method like this:

[HttpPost]
public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action);
}

you'll find that you get an HTTP 404 error and:

{ "Message": "No HTTP resource was found that matches the request URI…" }

Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use model binding for this – mapping the POST parameters to a single strongly typed .NET object, not to individual parameters. Alternately you can accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, the dynamic JObject/JValue objects might also work. Model binding is fine in many use cases, but it can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks, one 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case – and, maybe more so, a behavior I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters?" is already a frequent one, so I think this is fairly important, but unfortunately missing in the base Web API installation.
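For contrast, here is roughly what the model-binding route the post refers to looks like – a minimal sketch with an invented DTO name, not code from the original post. Web API will bind the URL-encoded body to the single typed object, while the simple action parameter still comes from the query string:

// Hypothetical DTO — one class like this per method signature is exactly
// the "class pollution" the post is complaining about.
public class SimpleValuesModel
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

[HttpPost]
public string PostMultipleSimpleValuesViaModel(SimpleValuesModel model,
                                               string action = null)
{
    // 'model' is bound from the form body; 'action' from the query string.
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         model.Name, model.Value, model.Entered, action);
}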
Creating a Custom Parameter Binder

Luckily Web API is greatly extensible, and there's a way to create a custom parameter binding to provide this functionality! Although this solution took me a long while to find – and then only with the help of some folks at Microsoft (thanks Hong Mei!!!) – it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters – they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:

public class SimplePostVariableParameterBinding : HttpParameterBinding
{
    private const string MultipleBodyParameters = "MultipleBodyParameters";

    public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor)
        : base(descriptor)
    { }

    /// <summary>
    /// Check for simple binding parameters in POST data. Bind POST
    /// data as well as query string data.
    /// </summary>
    public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                             HttpActionContext actionContext,
                                             CancellationToken cancellationToken)
    {
        // Body can only be read once, so read and cache it
        NameValueCollection col = TryReadBody(actionContext.Request);

        string stringValue = null;
        if (col != null)
            stringValue = col[Descriptor.ParameterName];

        // try reading the query string if we have no POST/PUT match
        if (stringValue == null)
        {
            var query = actionContext.Request.GetQueryNameValuePairs();
            if (query != null)
            {
                var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower());
                if (matches.Count() > 0)
                    stringValue = matches.First().Value;
            }
        }

        object value = StringToType(stringValue);

        // Set the binding result here
        SetValue(actionContext, value);

        // now we can return a completed task with no result
        TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>();
        tcs.SetResult(default(AsyncVoid));
        return tcs.Task;
    }

    private object StringToType(string stringValue)
    {
        object value = null;

        if (stringValue == null)
            value = null;
        else if (Descriptor.ParameterType == typeof(string))
            value = stringValue;
        else if (Descriptor.ParameterType == typeof(int))
            value = int.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int32))
            value = Int32.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(Int64))
            value = Int64.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(decimal))
            value = decimal.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(double))
            value = double.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(DateTime))
            value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture);
        else if (Descriptor.ParameterType == typeof(bool))
        {
            value = false;
            if (stringValue == "true" || stringValue == "on" || stringValue == "1")
                value = true;
        }
        else
            value = stringValue;

        return value;
    }

    /// <summary>
    /// Read and cache the request body
    /// </summary>
    private NameValueCollection TryReadBody(HttpRequestMessage request)
    {
        object result = null;

        // try to read out of the cache first
        if (!request.Properties.TryGetValue(MultipleBodyParameters, out result))
        {
            // parses a string like firstname=Hongmei&lastname=Ge
            result = request.Content.ReadAsFormDataAsync().Result;
            request.Properties.Add(MultipleBodyParameters, result);
        }

        return result as NameValueCollection;
    }

    private struct AsyncVoid
    { }
}

The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding fires only if the incoming parameter is a simple type (that gets defined later, when I hook up the binding), so it never fires on complex types or if the parameter is not a simple type. For the first parameter of a request, the binding reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it; subsequent parameters use the cached POST value collection. Once the form collection is available, the value of the parameter is read and translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped.

Once you have the ParameterBinding in place, the binding has to be assigned. This is done, along with all other Web API configuration tasks, at application startup in global.asax's Application_Start:

GlobalConfiguration.Configuration.ParameterBindingRules
    .Insert(0, (HttpParameterDescriptor descriptor) =>
    {
        var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods;

        // Only apply this binder on POST and PUT operations
        if (supportedMethods.Contains(HttpMethod.Post) ||
            supportedMethods.Contains(HttpMethod.Put))
        {
            var supportedTypes = new Type[] { typeof(string), typeof(int),
                                              typeof(decimal), typeof(double),
                                              typeof(bool), typeof(DateTime) };

            if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0)
                return new SimplePostVariableParameterBinding(descriptor);
        }

        // let the default bindings do their work
        return null;
    });

The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is a POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map; the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance – in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hookup code are in place, you can pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings.

Summary

Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Microsoft to provide a base example as I asked around, so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible, making it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved – in the last two months I've had two customers with projects that decided not to use Web API in AJAX-heavy SPA applications because this POST variable mapping wasn't available. This might actually change their minds and get them to switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers, and while I could have worked around this (with an untyped JObject or the form collection, mostly), having proper POST-to-parameter mapping makes things much easier.
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added – especially since this binding doesn't affect existing bindings.

Resources
SimplePostVariableParameterBinding Source on GitHub
Global.asax hookup source
Mapping URL Encoded Post Values in ASP.NET Web API

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api, AJAX
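To sanity-check the binding end to end from .NET, a small console sketch like the following could replay the same form POST the examples above send from jQuery. This is a hedged sketch, not code from the post; the URL mirrors the sample's path and is hypothetical, so adjust host and path to your setup:

using System;
using System.Collections.Generic;
using System.Net.Http;

class TestPost
{
    static void Main()
    {
        var client = new HttpClient();

        // Same three form variables as the jQuery examples above.
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "name", "Rick" },
            { "value", "12" },
            { "entered", "12/01/2011" }
        });

        // Hypothetical local URL — point this at wherever the sample is hosted.
        var response = client.PostAsync(
            "http://localhost/AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam",
            form).Result;

        Console.WriteLine(response.Content.ReadAsStringAsync().Result);
    }
}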

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations. We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place that a stable Windows machine can access them. We’ll go through several methods, starting each section from the Ubuntu desktop – if you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered! Use a Healthy Hard Drive If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it, and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows. Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive. Connect a USB Hard Drive or Flash Drive An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu. If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing. Connect to a Windows PC on your Local Network If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. 
Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service; click the Install service button. Some files will be downloaded and then installed. When they're done installing, you'll be appropriately notified. You will be prompted to restart your session. Don't worry, this won't actually log you out, so go ahead and press the Restart session button. The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you'd like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share; allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder's icon. At this point, you are done with the Ubuntu machine. Head to your Windows PC and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: This example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier, as well as any other folders you've shared from Ubuntu. Double-click on the folder you want to access, and from there you can move the files from the machine booted with Ubuntu to your Windows PC.

Upload to an Online Service

There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren't transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth and to save time clicking on files, as uploading a single compressed file is much less work than uploading a ton of little files. To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We're using .zip because we can open it anywhere and the compression rate is acceptable. Click Create, and the compressed file will show up in the location selected in the Compress window.

Dropbox

If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a limit of 2 GB of files in total. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button at the top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…, find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes.

Google Docs

Google Docs allows the upload of any type of file – making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB.
Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety's sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload.

Go Online – Through FTP

If you have access to an FTP server – perhaps through your web hosting company, or because you've set up an FTP server on a different machine – you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don't go over your quota, if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password; you can choose to remember it until you log out, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client.

Conclusion

While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet – and this article has only scratched the surface. Whatever the storage medium, Ubuntu's got an interface for it!

    Read the article

  • ODEE Green Field (Windows) Part 5 - Deployment and Validation

    - by AndyL-Oracle
And here we are, almost finished with our installation of Oracle Documaker Enterprise Edition ("ODEE") in a Windows green field environment. Let's recap what we've done so far:

- In part 1, I went over the basic process that I intended to show with installing ODEE on a green field server, and walked you through the basic installation of an Oracle 11g database.
- In part 2, I covered the installation of the WebLogic application server.
- In part 3, I showed you how to install SOA Suite for WebLogic.
- In part 4, we did the first part of the installation of ODEE itself.

What remains after all of that is the deployment of the ODEE components onto the database and application server – so let's get to it!

DATABASE

First, we'll deploy the schemas to the database. The schemas are created during the ODEE installation according to the responses provided during the install process. To deploy the schemas, you'll need to log in to the database server in your green field environment. Open a command line and CD into ODEE_HOME\documaker\database\oracle11g. Run SQL*Plus as SYSDBA and execute dmkr_admin.sql:

sqlplus / as sysdba @dmkr_admin.sql

Then execute dmkr_asline.sql and dmkr_admin_correspondence_example.sql. If you require additional languages, run the appropriate SQL scripts (e.g. dmkr_asline_es.sql for Spanish).

APPLICATION SERVER

Next, we'll deploy the WebLogic domain and its components – Documaker web services, Documaker Interactive, the Documaker dashboard, and more. To deploy the components, you'll need to log in to the application server in your green field environment.

1. Open Windows Explorer and navigate to ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts.
2. Using a text editor such as Notepad++, modify weblogic_installation_properties and set the location of MIDDLEWARE_HOME and ODEE_HOME. If you have used the defaults you'll probably just need to change E: to C:. Save the changes.
3. In the same directory, use your text editor to modify set_middleware_env.cmd and set the drive and path to MIDDLEWARE_HOME. Again, if you have used the defaults you'll probably just need to change E: to C:. Save the changes.
4. In the same directory, execute wls_create_domain.cmd by double-clicking it. This should run to completion. If it does not, review any errors, correct them, and rerun the script.
5. In the same directory, execute wls_add_correspondence.cmd by double-clicking it – again, this should run to completion.
6. Next, we'll start the AdminServer – this is the main WebLogic domain server. To start it, use Windows Explorer to navigate to MIDDLEWARE_HOME\user_projects\domains\idocumaker_domain. Double-click startWebLogic.cmd and the server startup will begin. Once you see output indicating that the server status changed to RUNNING, you may proceed.
   a. Note: if you saw database connection errors, you probably didn't make sure your database name and connection type match. You can change this manually in the WebLogic Console. Open a browser and navigate to http://localhost:7001/console (replace localhost with the name of your application server host if you aren't opening the browser on the server), and log in with the weblogic credential you provided in the ODEE installation process.
   b. Once you're logged in, open Services > Data Sources. Select dmkr_admin and click Connection Pool.
   c. The end of the URL should match the connection type you chose.
      If you chose ServiceName, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<serviceName>, and if you chose SID, the URL should be jdbc:oracle:thin:@//<hostname>:1521/<SIDname>.
   d. An example serviceName is a fully qualified DNS-style name, e.g. "idmaker.us.oracle.com" (it does not need to actually resolve in DNS). An example SID is just a name, e.g. IDMAKER.
   e. Save the change and repeat for the data source dmkr_asline.
   f. You will also need to make the same changes in the ODEE_HOME/documaker/docfactory/config/context/.bindings file – open the file in a text editor, locate the URL lines, make the appropriate change, and save the file.
7. Back in the ODEE_HOME\documaker\j2ee\weblogic\oracle11g\scripts directory, execute create_users_groups.cmd.
8. In the same directory, execute create_users_groups_correspondence_example.cmd.
9. Open a browser and navigate to http://localhost:7001/jpsquery. Replace localhost with the name of your application server host if you aren't running the browser on the application server, and if you changed the default port for the AdminServer from 7001, use the port you changed it to. You should see output indicating that the users and groups are in place.
10. Start the WebLogic managed servers by opening a command prompt and navigating to MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/. When you start the servers listed below, you will be prompted to enter the WebLogic credentials. You can prevent this by providing the credentials as the WLS_USER and WLS_PASS values in the startManagedWebLogic.cmd file; note that they will then be stored in cleartext. To start each server, type in the command shown:
   a. Start the JMS server: startManagedWebLogic.cmd jms_server
   b. Start Dashboard/Documaker Administrator: startManagedWebLogic.cmd dmkr_server
   c. Start Documaker Interactive for Correspondence: startManagedWebLogic.cmd idm_server

SOA Composites

If you're planning on testing the approval-process components of BPEL that can be used with Documaker Interactive, use the following steps to deploy the SOA composites. If you're not going to use BPEL, you can skip to the next section.

1. Stop the servers listed in the previous section (step 10) in the reverse order that they were started.
2. Run the domain configuration command: navigate to and execute MIDDLEWARE_HOME/wlserver_10.3/common/bin/config.cmd.
3. Select Extend and click Next.
4. Select the iDocumaker Domain and click Next.
5. Select Oracle SOA Suite – 11.1.1.0 (this may automatically select other components, which is OK). Click Next.
6. View the Configure JDBC Resources screen. You should not make any changes. Click Next.
7. Check both connections and click Test Connections. After a successful test, click Next. If the tests fail, something is broken: go back to Configure JDBC Resources and check your service name/SID.
8. Check all schemas. Set a password (it will be the same for all schemas). Enter the database information (service name, host name, port). Click Next.
9. The connections should test successfully. If not, go back and fix any errors. Click Next.
10. Click Next to pass through Optional Configuration.
11. Click Extend.
12. Click Done.
13. Open a terminal window and navigate to/execute: ODEE_HOME/documaker/j2ee/weblogic/oracle11g/bpel/antbuild.cmd
14. Start the WebLogic servers – AdminServer, jms_server, dmkr_server, idm_server. If you forgot how to do this, see step 10 in the previous section.
    Note: if you previously changed the startManagedWebLogic.cmd script for WLS_USER and WLS_PASS, you will need to make those changes again.
15. Start the WebLogic server soa_server1: MIDDLEWARE_HOME/user_projects/domains/idocumaker_domain/bin/startManagedWebLogic.cmd soa_server1
16. Open a browser to http://localhost:7001/console and log in.
17. Navigate to Services > Data Sources and select DMKR_ASLINE.
18. Click the Targets tab, check soa_server1, then click Save. Repeat for the DMKR_ADMIN data source.
19. Open a command prompt, navigate to ODEE_HOME/j2ee/weblogic/oracle11g/scripts, then execute deploy_soa.cmd.

That's it! (As if that wasn't enough?)

DOCUMAKER

Deploy the sample MRL resources by navigating to and executing ODEE_HOME/documaker/mstrres/dmres/deploysamplemrl.bat. You should see approximately 500 resources deployed into the database.

Start the Factory services: Start > Run > services.msc. Locate the service named "ODDF xxxx", right-click it, and select Start. Note that each assembly line has a separate Factory setup, including its own Factory service and Docupresentment service. The services are named for the assembly line and the machine on which they are installed, because you could have multiple machines servicing a single assembly line; this allows for easy scripting to control all the services if you choose to do so. Repeat for the Docupresentment service – again, each assembly line has a separate Docupresentment.

Using Windows Explorer, navigate to ODEE_HOME/documaker/mstrres/dmres/input, select one of the XML files, and copy it into ODEE_HOME/documaker/hotdirectory. Note: if you chose a different hot directory during installation, copy the file there instead. Momentarily you should see the XML file disappear!

Open a browser and navigate to http://localhost:10001/DocumakerDashboard (previous versions 12.0-12.2 use http://localhost:10001/dashboard) and verify that the job processed successfully. Note that some transactions may fail if you do not have a properly configured email server, and this is OK. You can set up a simple SMTP development server (just search the internet for "SMTP developer" and you'll get several to choose from).

So... that's it? Where are we at this point? You now have a completely functional ODEE installation, from soup to nuts as they say. You can further expand your installation by doing some of the following activities:

- clustering WebLogic services
- configuring WebLogic for redundancy
- configuring Oracle 11g for RAC
- adding additional Factory servers for redundancy/processing capacity
- setting up a real MRL (instead of the sample resources)
- testing Documaker Web Services for job submission
- and more!

I certainly hope you've enjoyed this and find it useful. If you find yourself running into trouble, visit the Oracle Community for Documaker – there is plenty of activity there and you can ask questions. For more concentrated assistance, you can engage an Oracle consultant who is a subject matter expert to assist you. Feel free to email me [andy (dot) little (at) oracle (dot) com] and I can connect you with the appropriate resource to get started. Best of luck! -Andy

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
For upcoming posts it seemed like a good idea to dedicate some time to basic cluster concepts and theory. This post leaves out a lot of details that would explode the article's size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you overall complete definitions of cluster, cluster agent, quorum, voting, fencing, split brain condition, and amnesia, so the following is more of an explanation. Here we go.

-------- Cluster, HA, failover, switchover, scalability --------

An attempted definition of a cluster: a cluster is a set of two or more server nodes dedicated to keeping application services alive. The nodes communicate with each other through the cluster software/framework, test and probe the health status of server nodes and services, and use quorum-based decisions and switchover/failover techniques to keep the application services running on them available. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other nodes take over and run the services, so that availability is guaranteed. Providing availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.

At this point we have to add that this defines an HA cluster – a High-Availability cluster – where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can instead be run in a distributed or scalable fashion. In that case instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way, or a database running in an active-active RAC setup.

-------- Cluster architecture, interconnect, topologies --------

Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects.

How many cluster nodes are in a cluster? There are different cluster topologies. The simplest one is a clustered-pair topology, involving only two cluster nodes. There are several more topologies – see the relevant documentation for the complete list. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster.

Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to survive only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case you should place them at least in different fire zones, or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a GeoCluster, which is a different kind of animal. What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.

-------- Cluster resource types, agents, resources, resource groups --------

So how does the cluster manage my applications?
The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, and so on. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop it on node A, start it on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle WebLogic, Zones, LDoms, NFS, DNS, etc.

We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups and switch these groups together from one node to another. To stick to the example above, to move an Oracle DB service from one node to another, you switch the group between nodes, and the agents of the cluster resources in the group do the following:

On node A:
- shut down the DB
- unconfigure the LogicalHost IP the DB listener listens on
- unmount the filesystem

Then, on node B:
- mount the filesystem
- configure the IP
- start up the DB

-------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem – cluster nodes think very much alike. Still, every action in a production system needs to be governed and agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, so it must be down. I'd better take control and run the services!" The problem is, as I have already mentioned, that cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances – now that is trouble. Also, in a two-node cluster each node has only 50% of the votes, that is, neither of them alone is allowed to run a cluster.

This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster, so a two-node cluster needs a quorum device or a quorum server. If the interconnect is gone – this is what we call a split brain condition – both nodes start to race and try to reserve the quorum device for themselves.
They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1). The node that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build and run a cluster; the other one – realizing it was late – panics or reboots to ensure the cluster configuration stays consistent.

Losing the interconnect doesn't only endanger the availability of services; it also endangers the consistency of the cluster configuration. Just imagine node A being down while the cluster configuration changes. Now node B goes down, and node A comes up. Node A isn't up to date about the changes to the cluster configuration, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older state of the cluster configuration repository – as if it forgot about the changes.

Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn't part of the cluster, or can't currently join it, cannot access the shared devices. This procedure is called fencing, and it usually happens to storage LUNs via SCSI reservation.

Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south. You run a stretched cluster in the clustered-pair topology. Where do you place the quorum disk or server? If you put it into the north DC and that gets hit by a meteor, you lose one cluster node – which isn't a problem – but you also lose your quorum, and the south node can't keep the cluster running for lack of votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at or to host a third cluster node. Otherwise, lacking a majority, if you lose the site that had your quorum, you lose the cluster.

Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, or upgrade procedures – should those be interesting for you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a followup post too. Now I really want to move on to the second part in the series: cluster installation. Oh, and as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html
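To make the voting arithmetic concrete (a worked example added here, not from the original post), a cluster may only form with a strict majority of all configured votes:

    majority(N) = floor(N / 2) + 1

Two nodes, no quorum device: N = 2, majority(2) = 2, so a lone surviving node (1 vote) can never form a cluster. Two nodes plus a quorum device: N = 3, majority(3) = 2, so the node that wins the race for the quorum device holds 1 + 1 = 2 votes and keeps the cluster running, while the losing node (1 vote) panics.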

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
In my client side development I deal a lot with content that pops over the main page – be it data entry 'windows' or dialogs or simple pop-up notes. In most cases this behavior goes with draggable windows, but sometimes it's also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here's a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately – by using a CSS class to define the picture box layout.

/*
 * Closable
 *
 * Makes selected DOM elements closable by making them
 * invisible when the close icon is clicked
 *
 * Version 1.01
 * @requires jQuery v1.3 or later
 *
 * Copyright (c) 2007-2010 Rick Strahl
 * http://www.west-wind.com/
 *
 * Licensed under the MIT license:
 * http://www.opensource.org/licenses/mit-license.php
 *
 * Support CSS:
 *
 * .closebox {
 *     position: absolute; right: 4px; top: 4px;
 *     background-image: url(images/close.gif);
 *     background-repeat: no-repeat;
 *     width: 14px; height: 14px;
 *     cursor: pointer;
 *     opacity: 0.60; filter: alpha(opacity="80");
 * }
 * .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); }
 *
 * Options:
 * handle        Element to place the closebox into (like, say, a header).
 *               Use if the main element and the closebox container are two different elements.
 * closeHandler  Function called when the close box is clicked.
 *               Return true to close the box, false to keep it visible.
 * cssClass      The CSS class to apply to the close box DIV or IMG tag.
 * imageUrl      Allows you to specify an explicit IMG url that displays the close icon.
 *               If used, bypasses CSS image styling.
 * fadeOut       Optional fadeOut speed. By default no fade out occurs.
 */
(function ($) {
    $.fn.closable = function (options) {
        var opt = {
            handle: null,
            closeHandler: null,
            cssClass: "closebox",
            imageUrl: null,
            fadeOut: null
        };
        $.extend(opt, options);

        return this.each(function (i) {
            var el = $(this);
            var pos = el.css("position");
            if (!pos || pos == "static")
                el.css("position", "relative");

            var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el;
            var div = opt.imageUrl ?
                      $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") :
                      $("<div>");
            div.addClass(opt.cssClass)
               .click(function (e) {
                   if (opt.closeHandler)
                       if (!opt.closeHandler.call(this, e))
                           return;
                   if (opt.fadeOut)
                       $(el).fadeOut(opt.fadeOut);
                   else
                       $(el).hide();
               });
            if (opt.imageUrl)
                div.css("background-image", "none");

            h.append(div);
        });
    }
})(jQuery);

The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is typically provided by way of a CSS class – .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this:

.closebox {
    position: absolute;
    right: 4px; top: 4px;
    background-image: url(images/close.gif);
    background-repeat: no-repeat;
    width: 14px; height: 14px;
    cursor: pointer;
    opacity: 0.60;
    filter: alpha(opacity="80");
}
.closebox:hover {
    opacity: 0.95;
    filter: alpha(opacity="100");
}

Alternately you can also supply an image URL, which overrides the background image in the style sheet.
I use this plug-in mostly on pop-up windows that can be closed, but it's also quite handy for remove/delete behavior in list displays. You can find this sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it's nice to have something reusable, because in my client framework there are lots of different kinds of windows that can be created – draggables, modal dialogs, hover panels etc. – and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that's closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

$().ready(function() {
    $('#ctl00_MainContent_panEditBook')
        .closable({ handle: $('#divEditBook_Header') })
        .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
        .shadow({ opacity: 0.25, offset: 6 });
});

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window is just closable to go away, so no event handler is applied. Actually I cheated – the actual page's .closable is a bit more ugly in the sample, as it uses an image from a resources file:

.closable({
    imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
    handle: $('#divEditBook_Header')
})

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

Maybe more interesting is applying the .closable behavior to list scenarios. For example, each of the individual items in the list display is also .closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached to the list when the client template is rendered. Here I'm using client templating, and the code that does this looks like this:

function loadBooks() {
    showProgress();

    // Clear the content
    $("#divBookListWrapper").empty();

    var filter = $("#" + scriptVars.lstFiltersId).val();
    Proxy.GetBooks(filter, function(books) {
        $(books).each(function(i) {
            updateBook(this);
            showProgress(true);
        });
    }, onPageError);
}

function updateBook(book, highlight) {
    // try to retrieve the single item in the list by tag attribute id
    var item = $(".bookitem[tag=" + book.Pk + "]");

    // grab and evaluate the template
    var html = parseTemplate(template, book);

    var newItem = $(html)
        .attr("tag", book.Pk.toString())
        .click(function() {
            var pk = $(this).attr("tag");
            editBook(this, parseInt(pk));
        })
        .closable({
            closeHandler: function(e) { removeBook(this, e); },
            imageUrl: "../../images/remove.gif"
        });

    if (item.length > 0)
        item.after(newItem).remove();
    else
        newItem.appendTo($("#divBookListWrapper"));

    if (highlight) {
        newItem
            .addClass("pulse")
            .effect("bounce", { distance: 15, times: 3 }, 400);
        setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
    }
}

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.
Ideally though (and these posts make me realize this often a little late), I probably should set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. The handler code can also return false to indicate that the window should not be closed; returning true will close the window. You can find more information about the .closable behavior and options here: .closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it so much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here's the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource")
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(),
                                                 ControlResources.CLOSE_ICON_RESOURCE);

    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",handler: " + this.ClientDialogHandler);

    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");

    startupScript.Append(@" .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit, this is a walk in the park. In those days there was a bunch of JS generation which was ugly to say the least. I know a lot of folks frown on using server controls, especially when the UI is as client centric as this example. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached more easily, and with the help of IntelliSense. This is especially true if you are dealing with complex, multiple plug-in associations that often express more easily as property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a small bit as useful as I have.

Related Links
Download jquery.closable
West Wind Web Toolkit
jQuery Plug-ins

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery, ASP.NET, JavaScript

    Read the article

  • A Closable jQuery Plug-in

    - by Rick Strahl
    In my client side development I deal a lot with content that pops over the main page. Be it data entry ‘windows’ or dialogs or simple pop up notes. In most cases this behavior goes with draggable windows, but sometimes it’s also useful to have closable behavior on static page content that the user can choose to hide or otherwise make invisible or fade out. Here’s a small jQuery plug-in that provides .closable() behavior to most elements by using either an image that is provided or – more appropriately by using a CSS class to define the picture box layout. /* * * Closable * * Makes selected DOM elements closable by making them * invisible when close icon is clicked * * Version 1.01 * @requires jQuery v1.3 or later * * Copyright (c) 2007-2010 Rick Strahl * http://www.west-wind.com/ * * Licensed under the MIT license: * http://www.opensource.org/licenses/mit-license.php Support CSS: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Options: * handle Element to place closebox into (like say a header). Use if main element and closebox container are two different elements. * closeHandler Function called when the close box is clicked. Return true to close the box return false to keep it visible. * cssClass The CSS class to apply to the close box DIV or IMG tag. * imageUrl Allows you to specify an explicit IMG url that displays the close icon. If used bypasses CSS image styling. * fadeOut Optional provide fadeOut speed. Default no fade out occurs */ (function ($) { $.fn.closable = function (options) { var opt = { handle: null, closeHandler: null, cssClass: "closebox", imageUrl: null, fadeOut: null }; $.extend(opt, options); return this.each(function (i) { var el = $(this); var pos = el.css("position"); if (!pos || pos == "static") el.css("position", "relative"); var h = opt.handle ? $(opt.handle).css({ position: "relative" }) : el; var div = opt.imageUrl ? $("<img>").attr("src", opt.imageUrl).css("cursor", "pointer") : $("<div>"); div.addClass(opt.cssClass) .click(function (e) { if (opt.closeHandler) if (!opt.closeHandler.call(this, e)) return; if (opt.fadeOut) $(el).fadeOut(opt.fadeOut); else $(el).hide(); }); if (opt.imageUrl) div.css("background-image", "none"); h.append(div); }); } })(jQuery); The plugin can be applied against any selector that is a container (typically a div tag). The close image or close box is provided typically by way of a CssClass - .closebox by default – which supplies the image as part of the CSS styling. The default styling for the box looks something like this: .closebox { position: absolute; right: 4px; top: 4px; background-image: url(images/close.gif); background-repeat: no-repeat; width: 14px; height: 14px; cursor: pointer; opacity: 0.60; filter: alpha(opacity="80"); } .closebox:hover { opacity: 0.95; filter: alpha(opacity="100"); } Alternately you can also supply an image URL which overrides the background image in the style sheet. 
I use this plug-in mostly on pop-up windows that can be closed, but it’s also quite handy for remove/delete behavior in list displays like this one. You can find the sample here if you want to play along: http://www.west-wind.com/WestwindWebToolkit/Samples/Ajax/AmazonBooks/BooksAdmin.aspx

For closable windows it’s nice to have something reusable, because in my client framework there are lots of different kinds of windows that can be created – Draggables, Modal Dialogs, HoverPanels etc. – and they all use the client .closable plug-in to provide the closable operation in the same way with a few options. Plug-ins are great for this sort of thing because they can also be aggregated, so different components can pick and choose the behavior they want. The window here is a draggable that is also closable and has shadow behavior, and the server control can simply generate the appropriate plug-ins to apply to the main <div> tag:

$().ready(function() {
    $('#ctl00_MainContent_panEditBook')
        .closable({ handle: $('#divEditBook_Header') })
        .draggable({ dragDelay: 100, handle: '#divEditBook_Header' })
        .shadow({ opacity: 0.25, offset: 6 });
});

The window is using the default .closebox style and has its handle set to the header bar (Book Information). The window simply closes when the close box is clicked, so no event handler is applied. Actually I cheated – the actual page’s .closable call is a bit uglier in the sample because it uses an image from a resource file:

.closable({
    imageUrl: '/WestWindWebToolkit/Samples/WebResource.axd?d=TooLongAndNastyToPrint',
    handle: $('#divEditBook_Header')
})

so you can see how to apply a custom image, which in this case is generated by the server control wrapping the client DragPanel.

More interesting perhaps is applying the .closable behavior to list scenarios. For example, each of the individual items in the list display is also closable using this plug-in. Rather than having to define each item with HTML for an image, event handler and link, the closable behavior is attached when the client template is rendered. Here I’m using client templating, and the code that does this looks like this:

function loadBooks() {
    showProgress();

    // Clear the content
    $("#divBookListWrapper").empty();

    var filter = $("#" + scriptVars.lstFiltersId).val();

    Proxy.GetBooks(filter, function(books) {
        $(books).each(function(i) {
            updateBook(this);
            showProgress(true);
        });
    }, onPageError);
}

function updateBook(book, highlight) {
    // try to retrieve the single item in the list by tag attribute id
    var item = $(".bookitem[tag=" + book.Pk + "]");

    // grab and evaluate the template
    var html = parseTemplate(template, book);

    var newItem = $(html)
        .attr("tag", book.Pk.toString())
        .click(function() {
            var pk = $(this).attr("tag");
            editBook(this, parseInt(pk));
        })
        .closable({
            closeHandler: function(e) {
                removeBook(this, e);
            },
            imageUrl: "../../images/remove.gif"
        });

    if (item.length > 0)
        item.after(newItem).remove();
    else
        newItem.appendTo($("#divBookListWrapper"));

    if (highlight) {
        newItem
            .addClass("pulse")
            .effect("bounce", { distance: 15, times: 3 }, 400);
        setTimeout(function() { newItem.removeClass("pulse"); }, 1200);
    }
}

Here the closable behavior is applied to each of the items along with an event handler, which is nice and easy compared to having to embed the right HTML and click handling into each item in the list individually via markup.
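The removeBook function referenced in the closeHandler above isn’t part of the listing. As a rough sketch of what it might look like – Proxy.RemoveBook is a hypothetical service method assumed to mirror Proxy.GetBooks – the handler could remove the item only after the server confirms the delete:

// Hypothetical sketch: 'el' is the close box element passed as 'this'
// from the closeHandler wrapper above; Proxy.RemoveBook is assumed here.
function removeBook(el, e) {
    // find the list item that contains the clicked close box
    var item = $(el).closest(".bookitem");
    var pk = parseInt(item.attr("tag"));

    Proxy.RemoveBook(pk, function() {
        // remove the item from the list once the server delete succeeds
        item.fadeOut("slow", function() { $(this).remove(); });
    }, onPageError);
}

Because the closeHandler wrapper above doesn’t return true, the plug-in never hides the item itself; the fade out happens only when the server call comes back.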
Ideally though (and these posts often make me realize this a little late) I should probably set up a custom cssClass to handle the rendering – maybe a CSS class called .removebox that only changes the image from the default box image. This example also hooks up an event handler that is fired in response to the close. In the list I need to know when the remove button is clicked so I can fire off a service call to the server to actually remove the item from the database. Optionally, the handler code can return false to indicate that the window should not be closed; returning true closes the window. You can find more information about the .closable behavior and its options here: .closable Documentation

Plug-ins make Server Control JavaScript much easier

I find this plug-in immensely useful, especially as part of server control code, because it tremendously simplifies the code that has to be generated server side. This is true of plug-ins in general, which make it much easier to create simple server code that only generates plug-in options rather than full blocks of JavaScript code. For example, here’s the relevant code from the DragPanel server control which generates the .closable() behavior:

if (this.Closable && !string.IsNullOrEmpty(DragHandleID))
{
    string imageUrl = this.CloseBoxImage;
    if (imageUrl == "WebResource")
        imageUrl = ScriptProxy.GetWebResourceUrl(this, this.GetType(),
                                                 ControlResources.CLOSE_ICON_RESOURCE);

    StringBuilder closableOptions = new StringBuilder("imageUrl: '" + imageUrl + "'");

    if (!string.IsNullOrEmpty(this.DragHandleID))
        closableOptions.Append(",handle: $('#" + this.DragHandleID + "')");

    if (!string.IsNullOrEmpty(this.ClientDialogHandler))
        closableOptions.Append(",handler: " + this.ClientDialogHandler);

    if (this.FadeOnClose)
        closableOptions.Append(",fadeOut: 'slow'");

    startupScript.Append(@" .closable({ " + closableOptions + "})");
}

The same sort of block is then used for .draggable and .shadow, which simply set options. Compared to the code I used to have in pre-jQuery versions of my JavaScript toolkit this is a walk in the park. In those days there was a bunch of JS generation, which was ugly to say the least.

I know a lot of folks frown on using server controls, especially when the UI is client centric as in this example. However, I do feel that server controls can greatly simplify the process of getting the right behavior attached, more easily and with the help of IntelliSense. The script markup is often easier to generate this way, especially if you are dealing with complex, multiple plug-in associations that are expressed more easily as property values on a control. Regardless of whether server controls are your thing or not, this plug-in can be useful in many scenarios. Even in simple client-only scenarios, using a plug-in with a few simple parameters is nicer and more consistent than creating the HTML markup over and over again. I hope some of you find this even a fraction as useful as I have.

Related Links
Download jquery.closable
West Wind Web Toolkit
jQuery Plug-ins

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery   ASP.NET   JavaScript

    Read the article
