Search Results

Search found 10366 results on 415 pages for 'const char pointer'.

Page 363/415

  • Visual C++ 2010, rvalue reference bug?

    - by Sergey Shandar
    Is it a bug in Visual C++ 2010, or correct behaviour?

        template<class T> T f(T const &r) { return r; }

        template<class T> T f(T &&r)
        {
            static_assert(false, "no way");
            return r;
        }

        int main()
        {
            int y = 4;
            f(y);
        }

    I thought the function f(T &&) should never be called, but it is called with T = int &. The output:

        main.cpp(10): error C2338: no way
        main.cpp(17): see reference to function template instantiation 'T f<int&>(T)' being compiled
            with
            [
                T=int &
            ]

    Update 1: Do you know of any C++0x compiler to use as a reference? I tried the Comeau online test-drive but could not get an rvalue reference to compile.

    Update 2: A workaround (using SFINAE):

        #include <boost/utility/enable_if.hpp>
        #include <boost/type_traits/is_reference.hpp>

        template<class T> T f(T &r) { return r; }

        template<class T>
        typename ::boost::disable_if< ::boost::is_reference<T>, T>::type f(T &&r)
        {
            static_assert(false, "no way");
            return r;
        }

        int main()
        {
            int y = 4;
            f(y);
            // f(5); // generates the "no way" error, as expected.
        }
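
    A side note and sketch, since this trips a lot of people up: for a non-const lvalue argument, T deduces to int&, reference collapsing makes the T&& parameter int&, and that exact match beats the const& overload, so selecting f(T &&) is arguably conforming behaviour rather than a bug. The same guard as the Boost workaround can be written with <type_traits> alone; a minimal sketch, assuming a compiler that supports default template arguments on function templates (VC++ 2010 does not, so the return-type form above is the one to use there):

        #include <type_traits>

        template<class T> T f(T const &r) { return r; }   // lvalues land here

        // Rvalues only: for lvalues T deduces to an lvalue reference, so
        // this overload is removed from the set and no static_assert is needed.
        template<class T,
                 class = typename std::enable_if<
                     !std::is_lvalue_reference<T>::value>::type>
        T f(T &&r) { return static_cast<T&&>(r); }

        int main()
        {
            int y = 4;
            f(y);   // picks the const& overload
            f(5);   // picks the T&& overload
        }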

  • C++ Beginner - Trouble using structs and constants!

    - by Francisco P.
    Hello everyone! I am currently working on a simple Scrabble implementation for a college project, and I can't get a part of it to work. My board.h: http://pastebin.com/J9t8VvvB

    The subroutine where the error lies (contained in board.cpp; Pos is a struct containing a char, y, an int, x, and an orientation, o, which is not used in this particular case):

        void Board::showBoard()
        {
            Pos temp;
            temp.o = 0;
            for (temp.y = 'A'; temp.y < (65 + TOTAL_COLUMNS); ++temp.y) {
                for (temp.x = 1; temp.x < (1 + TOTAL_ROWS); ++temp.x) {
                    cout << _matrix[temp].getContents();
                }
                cout << endl;
            }
        }

    The errors returned at compile time: http://pastebin.com/bZv7fggq

    How come the errors state that I am trying to compare two Pos when I am comparing chars and ints? I also really can't place the other errors... Thanks for your time!
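
    A hedged guess at the root cause, assuming _matrix is a std::map (or a similar ordered container) keyed on Pos: the map's operator[] needs std::less<Pos>, i.e. an operator< for the key type, so the "comparing two Pos" errors come from inside the map instantiation, not from the loop conditions. A minimal sketch:

        struct Pos {
            char y;
            int  x;
            int  o;

            // Strict weak ordering so Pos can be used as a map key.
            // Ignoring o here is an assumption about the intended semantics.
            bool operator<(const Pos& rhs) const {
                if (y != rhs.y) return y < rhs.y;
                return x < rhs.x;
            }
        };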

  • Problem using a COM interface as parameter

    - by Cesar
    I have the following problem: I have two projects, Project1 and Project2. In Project1 I have an interface IMyInterface1. In Project2 I have an interface IMyInterface2 with a method that receives a pointer to IMyInterface1. When I use import "Project1.idl"; in my Project2.idl, a #include "Project1.h" appears in Project2___i.h. But this file does not even exist! What is the proper way to import an interface defined in another library into an .idl file? I tried replacing the #include "Project1.h" with #include "Project1_i.h" or #include "Project1_i.c", but that gave me a lot of errors. I also tried using importlib("Project1.tlb") and defining my interface IMyInterface2 within the library definition, but when I compile the Project2PS project, an error is raised (something like "dlldata.c is not generated if no interface is defined"). I tried creating a dummy Project1.h, but then when Project2___i.h is compiled, the compiler cannot find IMyInterface1. And if I include Project1___i.h I get a lot of errors again! Apparently it is a simple issue, but I don't know how to solve it. I'm stuck with that! By the way, I'm using VS2008 SP1. Thanks in advance.

  • Boost program not working on Linux

    - by Martin Lauridsen
    Hi SOF, I have this program which uses Boost::Asio for sockets. I pretty much altered some code from the Boost examples. The program compiles and runs just like it should on Windows in VS. However, when I compile the program on Linux and run it, I get a segmentation fault. I posted the code here. The command I use to compile it is this:

        c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include \
            -I/appl/htopopt/Linux_x86_64/boost_1_43_0/include \
            mpqs.cpp mpqs_polynomial.cpp mpqs_host.cpp -o mpqs_host \
            -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl \
            -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm \
            -L/appl/htopopt/Linux_x86_64/boost_1_43_0/lib \
            -lboost_system -lboost_thread -static -lpthread

    By commenting out code, I have found that I get the segmentation fault due to the following line:

        boost::asio::io_service io_service;

    Can anyone provide any assistance as to what the problem (and the solution) may be? Thanks!

    Edit: I tried reducing the program to a minimal example, using no other libraries or headers, just boost/asio.hpp:

        #define DEBUG 0

        #include <boost/asio.hpp>

        int main(int argc, char* argv[])
        {
            boost::asio::io_service io_service;
            return 0;
        }

    I also removed the other library inclusions and linking on compilation, but this minimal example still gives me a segmentation fault.

  • jQuery autocomplete with dynamic input box?

    - by Kaps Hasija
    I have done a lot of R&D on jQuery autocomplete and found some results, but not as much as I needed. Here is the code I am currently using:

        <!-- This input box will be created dynamically, using a for loop -->
        <input type="text" value="|a" name="completeMe" id="Subject" />

        // My jQuery code
        $(function () {
            $("#Subject").autocomplete({
                source: '/Cataloging/Bib/GetSubject',
                minLength: 1,
                select: function (event, ui) {
                    // Do something with "ui.item.Id" or "ui.item.Name" or any of the
                    // other properties you selected to return from the action
                }
            });
        });

        // My action method
        public ActionResult GetSubject(string term)
        {
            term = term.Substring(2, term.Length - 2);
            return Json(db.BibContents.Where(city => city.Value.StartsWith(term))
                          .Select(city => city.Value), JsonRequestBehavior.AllowGet);
        }

    My code works with a static input, but since the input boxes are created dynamically I need to use a live event, and I don't know how to use a live event with this code. NOTE: I am using the static value "|a" in the input; after it arrives at the action I remove the first two chars to make a proper search in the database. Thanks

  • SQLite DB open takes really long

    - by sxingfeng
    I am using sqlite in C++ on Windows, and I have a db about 60 MB in size. When I open the sqlite db, it takes about 13 seconds:

        sqlite3* mpDB;
        nRet = sqlite3_open16(szFile, &mpDB);

    If I close my application and reopen it, the open then takes less than 1 second. At first I thought it was because of the disk cache, so I preloaded the 60 MB db file before the sqlite open, reading the file using CFile. However, even after preloading, the first open is still very slow.

        BOOL CQFilePro::PreLoad(const CString& strPath)
        {
            boost::shared_array<BYTE> temp = boost::shared_array<BYTE>(new BYTE[PRE_LOAD_BUFFER_LENGTH]);
            int nReadLength;
            try {
                CFile file;
                if (file.Open(strPath, CFile::modeRead) == FALSE) {
                    return FALSE;
                }
                do {
                    nReadLength = file.Read(temp.get(), PRE_LOAD_BUFFER_LENGTH);
                } while (nReadLength == PRE_LOAD_BUFFER_LENGTH);
                file.Close();
            } catch(...) {
            }
            return TRUE;
        }

    My question is: what is the difference between the first open and the second open, and how can I accelerate the sqlite open process?
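
    One diagnostic worth running (a sketch, under the assumption that the cost is lazy I/O rather than the open call itself): sqlite3_open16 reads very little of the file; most pages are touched on first access. Timing the open separately from a first cheap statement shows where the 13 seconds actually go:

        #include <windows.h>
        #include "sqlite3.h"

        // Hypothetical timing harness: measures the open and the first
        // statement separately, to see which one pays for the cold cache.
        void TimeOpen(const wchar_t* szFile)
        {
            DWORD t0 = GetTickCount();
            sqlite3* db = NULL;
            sqlite3_open16(szFile, &db);
            DWORD t1 = GetTickCount();                      // open alone

            sqlite3_exec(db, "SELECT 1;", NULL, NULL, NULL);
            DWORD t2 = GetTickCount();                      // first touch of the file

            // (t1 - t0) vs (t2 - t1) separates API cost from disk cost
            sqlite3_close(db);
        }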

  • Using an initializer_list on a map of vectors

    - by Hooked
    I've been trying to initialize a map of <int, vector<int> > using the new 0x standard, but I cannot seem to get the syntax correct. I'd like to make a map with a single entry, key:value = 1:<3,4>

        #include <initializer_list>
        #include <map>
        #include <vector>
        using namespace std;

        map<int, vector<int> > A = {1,{3,4}};

    It dies with the following error using gcc 4.4.3:

        error: no matching function for call to 'std::map<int, std::vector<int, std::allocator<int> >,
            std::less<int>, std::allocator<std::pair<const int, std::vector<int, std::allocator<int> > > >
            >::map(<brace-enclosed initializer list>)'

    Edit: Following the suggestion by Cogwheel and adding the extra brace, it now compiles, with a warning that can be gotten rid of using the -fno-deduce-init-list flag. Is there any danger in doing so?
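
    For reference, a sketch of the bracing the map constructor expects: each element of the outer list is itself a braced pair.

        #include <map>
        #include <vector>

        // Outer braces: the list of map entries; middle braces: one
        // pair<const int, vector<int>>; inner braces: the vector's values.
        std::map<int, std::vector<int> > A = { { 1, { 3, 4 } } };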

  • Console in VS 2012 Express for C++?

    - by Live2Code
    I'm very new to programming, so be nice. I was using Eclipse for C/C++ development for a while, but it seemed quite buggy, so I was advised to switch to Visual Studio Express. I'm just testing it out with a simple "Hello World" program:

        #include <iostream>
        #include <string>
        using namespace std;

        int main( int argc, char ** argv )
        {
            string response;
            cout << "Gimme a string: " << flush;
            cin >> response;
            cout << "The string is: " << response << endl;
            system("pause");
            return 0;
        }

    Not much to go wrong there. Anyway, I noticed that there is no "console" like in Eclipse. All of the text pops up in a little command prompt window. Also, this window closes right after displaying new text if there is nothing else to do afterwards (like a cin). I have been told that I can use system("pause"), but there has to be a better way. In Eclipse, the text would not suddenly disappear because the console window closed. I know this question might be a little confusing; comment and I'll try to explain what I'm saying, or paste the code into your Visual Studio 2012 Express edition. Is there a way to display all of my text in a "console" as opposed to a command-prompt-type window, and why does it always close before I can read the last thing?
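
    Two common alternatives to system("pause"), sketched below: run with Ctrl+F5 ("Start Without Debugging"), in which case Visual Studio keeps the window open by itself, or block on standard input portably:

        #include <iostream>
        #include <limits>
        #include <string>
        using namespace std;

        int main()
        {
            string response;
            cout << "Gimme a string: " << flush;
            cin >> response;
            cout << "The string is: " << response << endl;

            // Portable replacement for system("pause"): discard the rest
            // of the current input line, then wait for Enter.
            cin.ignore(numeric_limits<streamsize>::max(), '\n');
            cin.get();
            return 0;
        }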

  • IE8: weird border around HTML button element

    - by s427
    I have a button element with a custom background (image + color) and no borders except for a 2px border-bottom (and a bunch of other properties -- code below) which renders quite differently in Firefox and in IE8. The problem is, this is work for a company that uses IE8 as their only browser, so it's important that the button renders well in IE8. Here's a visual comparison between the two. My question here is not about the padding difference (I'm looking into that), but about the weird border that is visible in IE8 in addition to the regular border (border-bottom). Can anyone explain to me where it comes from and how to get rid of it? Thanks in advance.

    Here is the HTML code:

        <button class="btn" id="c_edit">
            <span>Annuler</span>
        </button>

    And here is the CSS:

        .btn {
            display: inline-block;
            margin: 0 0 7px 5px;
            padding: 0;
            color: #ddd;
            font-size: 14px;
            font-family: FrutigerLTStd55Roman, sans-serif;
            text-decoration: none;
            border: none;
            border-bottom: 2px solid #222;
            background-color: #999;
            background-image: url('img/btn_bg.gif');
            background-position: 0 bottom;
            background-repeat: repeat-x;
            cursor: pointer;
            transition: all .5s ease-out;
        }

        .btn span {
            display: inline-block;
            margin: 0;
            padding: 8px 10px 6px 40px;
            background-color: transparent;
            background-position: 4px 0;
            background-repeat: no-repeat;
        }

  • Trapping messages in .NET

    - by user350632
    How can I trap a Windows system message (like WM_SETTEXT) that was sent to some window (the VLC player window, in my case)? I've tried to inherit from the NativeWindow class and override WndProc like this:

        class VLCFilter : NativeWindow
        {
            System.IntPtr iHandle;
            const int WM_SETTEXT = 0x000C;

            public VLCFilter()
            {
                Process p = Process.GetProcessesByName("vlc")[0];
                iHandle = p.MainWindowHandle;
            }

            protected override void WndProc(ref Message aMessage)
            {
                base.WndProc(ref aMessage);
                if (aMessage.HWnd != iHandle)
                    return;
                if (aMessage.Msg == WM_SETTEXT)
                {
                    MessageBox.Show("VLC window text changed!");
                }
            }
        }

    I have checked with Microsoft Spy++ that the WM_SETTEXT message is sent by the VLC player, but my code doesn't seem to get the work done. I've referred mainly to: http://www.codeproject.com/kb/dotnet/devicevolumemonitor.aspx

    I've been trying to make this work for some time with no success. What am I doing wrong? What am I not doing? Maybe there is an easier way to do this? My initial goal is to catch when the VLC player (which could be playing somewhere in the background and is not embedded in my application) repeats its playback (I have noticed that a WM_SETTEXT message is sent then, and I'm trying to detect it like this).

  • Need help profiling .NET caching extension method.

    - by rockinthesixstring
    I've got the following extension:

        Public Module CacheExtensions
            Sub New()
            End Sub

            Private sync As New Object()
            Public Const DefaultCacheExpiration As Integer = 1200 ''# 20 minutes

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal generator As Func(Of T)) As T
                Return cache.GetOrStore(key, If(generator IsNot Nothing, generator(), Nothing), DefaultCacheExpiration)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal generator As Func(Of T), ByVal expireInSeconds As Double) As T
                Return cache.GetOrStore(key, If(generator IsNot Nothing, generator(), Nothing), expireInSeconds)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal obj As T) As T
                Return cache.GetOrStore(key, obj, DefaultCacheExpiration)
            End Function

            <Extension()>
            Public Function GetOrStore(Of T)(ByVal cache As Cache, ByVal key As String, ByVal obj As T, ByVal expireInSeconds As Double) As T
                Dim result = cache(key)
                If result Is Nothing Then
                    SyncLock sync
                        If result Is Nothing Then
                            result = If(obj IsNot Nothing, obj, Nothing)
                            cache.Insert(key, result, Nothing, DateTime.Now.AddSeconds(expireInSeconds), cache.NoSlidingExpiration)
                        End If
                    End SyncLock
                End If
                Return DirectCast(result, T)
            End Function
        End Module

    From here, I'm using the extension in a TagService to get a list of tags:

        Public Function GetTagNames() As List(Of String) Implements Domain.ITagService.GetTags
            ''# We're not using a dynamic cache key because the list of TagNames
            ''# will persist across all users in all regions.
            Return HttpRuntime.Cache.GetOrStore(Of List(Of String))("TagNamesOnly",
                Function() _TagRepository.Read().Select(Function(t) t.Name).OrderBy(Function(t) t).ToList())
        End Function

    All of this is pretty much straightforward, except when I put a breakpoint on _TagRepository.Read(): it is getting called on every request, when I thought it would only be called when result Is Nothing. Am I missing something here?

  • Why won't this work? OpenCV Mat_<float>

    - by user1371674
    I can't seem to get this to work. I'm trying to get the pixel value of an image, but first I need to change the color of the image. Since I cannot use int, or just Mat, because the values are not whole numbers, I have to use Mat_<float>, and because of that, errors pop up when I try to run this from the cmd:

        int main(int argc, char **argv)
        {
            Mat img = imread(argv[1]);
            ofstream myfile;

            Mat_<float> MatBlue = img;
            int rows1 = MatBlue.rows;
            int cols1 = MatBlue.cols;

            for(int x = 0; x < cols1; x++) {
                for(int y = 0; y < rows1; y++) {
                    float val = MatBlue.at<cv::Vec3b>(y, x)[1];
                    MatBlue.at<cv::Vec3b>(y, x)[0] = val + 1;
                }
            }
        }
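
    A hedged sketch of the likely fix: imread returns a 3-channel 8-bit image, so reading it through at<cv::Vec3b> after declaring the matrix as Mat_<float> mixes element types. Converting explicitly to a 3-channel float matrix and indexing with the matching type keeps the non-integer values:

        #include <opencv2/opencv.hpp>

        int main(int argc, char **argv)
        {
            cv::Mat img = cv::imread(argv[1]);   // CV_8UC3 on success

            cv::Mat3f imgF;                      // Mat_<cv::Vec3f>
            img.convertTo(imgF, CV_32FC3);       // explicit 8U -> 32F conversion

            for (int y = 0; y < imgF.rows; y++) {
                for (int x = 0; x < imgF.cols; x++) {
                    float val = imgF(y, x)[1];   // green channel, as float
                    imgF(y, x)[0] = val + 1.0f;  // write blue channel
                }
            }
            return 0;
        }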

  • fopen / fopen_s and writing to files

    - by yCalleecharan
    Hi, I'm using fopen in C to write the output to a text file. The function declaration is (where ARRAY_SIZE has been defined earlier):

        void create_out_file(char file_name[], long double *z1)
        {
            FILE *out;
            int i;

            if((out = fopen(file_name, "w+")) == NULL) {
                fprintf(stderr, "* Open error on output file %s", file_name);
                exit(-1);
            }
            for(i = 0; i < ARRAY_SIZE; i++)
                fprintf(out, "%.16Le\n", z1[i]);
            fclose(out);
        }

    My questions:

    1. On compilation with MVS2008 I get the warning: "warning C4996: 'fopen': This function or variable may be unsafe. Consider using fopen_s instead." I haven't seen much information on fopen_s, so I'm not sure how to change my code. Any suggestions?

    2. Can one instruct fprintf to write at a desired precision? If I'm using long double, then I assume that my answers are good to 15 digits after the decimal point. Am I right?

    Thanks a lot...
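
    A sketch of the fopen_s variant (Microsoft's CRT; the FILE* becomes an out-parameter and the return value is an error code). ARRAY_SIZE is assumed defined, as in the original. On the precision question: fprintf precision is set per conversion, as the ".16" already does; note also that with MSVC long double has the same 64-bit representation as double, so roughly 15-16 significant decimal digits is the realistic limit.

        #include <stdio.h>
        #include <stdlib.h>

        void create_out_file(char file_name[], long double *z1)
        {
            FILE *out = NULL;
            errno_t err;
            int i;

            err = fopen_s(&out, file_name, "w+");
            if (err != 0 || out == NULL) {
                fprintf(stderr, "* Open error on output file %s (code %d)", file_name, (int)err);
                exit(-1);
            }
            for (i = 0; i < ARRAY_SIZE; i++)
                fprintf(out, "%.16Le\n", z1[i]);  /* precision given per conversion */
            fclose(out);
        }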

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is one that takes two arrays of numbers, sums the values at corresponding indexes, and writes them to a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float* n, __global float* answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for(int i = 0; i < gSize; i++){
                if(a[gid] > a[i]) x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU: the memory transfer would outweigh the time it takes to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup from an OpenCL kernel instead of the CPU?
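
    One way to frame an answer (a sketch, not a benchmark): keep the transfer exactly as small as in the add kernel, but raise the arithmetic intensity, i.e. the work done per element transferred. Something as simple as iterating a transcendental function per work-item is usually enough to tip the balance toward the GPU:

        // Hypothetical example: same I/O shape as "add", but compute-bound.
        // The iteration count is arbitrary; the point is work per byte moved.
        __kernel void heavy(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            float x = a[gid];
            for (int i = 0; i < 10000; i++)
                x = native_sin(x) * native_cos(x) + 0.5f * x;
            answer[gid] = x;
        }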

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central queue manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration), with them reporting back some additional info, such as the runtime of the converted audio etc.

    At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows based; the co-ordinator can be Windows or Linux.

    I have (in my initial searches) come across the following, and I know that some are cross-dependent:

        • Celery (with RabbitMQ)
        • Twisted
        • Django

    Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a server/worker 'queue management' solution, for non-web activities (this is why Django didn't seem the right fit).

  • How to make ActiveRecord work with legacy partitioned/sharded databases/tables?

    - by Utensil
    Thanks for your time first... After all the searching on Google, GitHub and here, I only got more confused about the big words (partition/shard/federate), so I figure I have to describe the specific problem I met and ask around.

    My company's databases deal with massive numbers of users and orders, so we split databases and tables in various ways, some of which are described below:

        way          database and table name   shard by (maybe it should be called "partitioned by"?)
        YZ.X         db_YZ.tb_X                order serial number, last three digits
        YYYYMMDD.    db_YYYYMMDD.tb            date
        YYYYMM.DD    db_YYYYMM.tb_DD           date too

    The basic concept is that databases and tables are separated according to a field (not necessarily the primary key), and there are too many databases and too many tables, so that writing or magically generating one database.yml config for each database and one model for each table isn't possible, or at least not the best solution.

    I looked into drnic's magic solutions, and DataFabric, and even the source code of ActiveRecord. Maybe I could use ERB to generate database.yml and do the database connection in an around filter, and maybe I could use named_scope to dynamically decide the table name for find, but update/create operations are bound to "self.class.quoted_table_name", so I couldn't easily get my problem solved that way. And even if I could generate one model for each table, their number runs up to 30 at most -- but this is just not DRY!

    What I need is a clean solution like the following DSL:

        class Order < ActiveRecord::Base
          shard_by :order_serialno do |key|
            [get_db_config_by(key),  # because some or all of the databases might share the same
                                     # machine in a regular way or can be configured by a hash of
                                     # regex, and it can also be a const
             get_db_name_by(key),
             get_tb_name_by(key)]
          end
        end

    Can anybody enlighten me? Any help would be greatly appreciated~~~~

  • Convert SQL Server stored procedure to MySQL

    - by karthik
    I need to convert the following SQL Server stored procedure to MySQL. I am new to MySQL... help needed.

        CREATE PROC InsertGenerator
        (@tableName varchar(100)) as

        --Declare a cursor to retrieve column specific information
        --for the specified table
        DECLARE cursCol CURSOR FAST_FORWARD FOR
        SELECT column_name, data_type FROM information_schema.columns
        WHERE table_name = @tableName
        OPEN cursCol

        DECLARE @string nvarchar(3000)     --for storing the first half
                                           --of the INSERT statement
        DECLARE @stringData nvarchar(3000) --for storing the data
                                           --(VALUES) related statement
        DECLARE @dataType nvarchar(1000)   --data types returned
                                           --for respective columns
        SET @string='INSERT '+@tableName+'('
        SET @stringData=''

        DECLARE @colName nvarchar(50)

        FETCH NEXT FROM cursCol INTO @colName,@dataType

        IF @@fetch_status<>0
        begin
            print 'Table '+@tableName+' not found, processing skipped.'
            close cursCol
            deallocate cursCol
            return
        END

        WHILE @@FETCH_STATUS=0
        BEGIN
            IF @dataType in ('varchar','char','nchar','nvarchar')
            BEGIN
                SET @stringData=@stringData+'''''''''+isnull('+@colName+','''')+'''''',''+'
            END
            ELSE if @dataType in ('text','ntext') --if the datatype
                                                  --is text or something else
            BEGIN
                SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(2000)),'''')+'''''',''+'
            END
            ELSE IF @dataType = 'money' --because money doesn't get converted
                                        --from varchar implicitly
            BEGIN
                SET @stringData=@stringData+'''convert(money,''''''+isnull(cast('+@colName+' as varchar(200)),''0.0000'')+''''''),''+'
            END
            ELSE IF @dataType='datetime'
            BEGIN
                SET @stringData=@stringData+'''convert(datetime,''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+''''''),''+'
            END
            ELSE IF @dataType='image'
            BEGIN
                SET @stringData=@stringData+'''''''''+isnull(cast(convert(varbinary,'+@colName+') as varchar(6)),''0'')+'''''',''+'
            END
            ELSE --presuming the data type is int,bit,numeric,decimal
            BEGIN
                SET @stringData=@stringData+'''''''''+isnull(cast('+@colName+' as varchar(200)),''0'')+'''''',''+'
            END

            SET @string=@string+@colName+','

            FETCH NEXT FROM cursCol INTO @colName,@dataType
        END

  • iPhone - Using SQLite database - insert statement failing

    - by Satyam svv
    Hi, I'm using an sqlite database in my iPhone app. I've a table which has 3 integer columns. I'm using the following code to write to that table:

        -(BOOL)insertTestResult
        {
            NSArray* paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString* documentsDirectory = [paths objectAtIndex:0];
            NSString* dataBasePath = [documentsDirectory stringByAppendingPathComponent:@"test21.sqlite3"];

            BOOL success = NO;
            sqlite3* database = 0;
            if(sqlite3_open([dataBasePath UTF8String], &database) == SQLITE_OK)
            {
                BOOL res = (insertResultStatement == nil) ? createStatement(insertResult, &insertResultStatement, database) : YES;
                if(res)
                {
                    int i = 1;
                    sqlite3_bind_int(insertResultStatement, 0, i);
                    sqlite3_bind_int(insertResultStatement, 1, i);
                    sqlite3_bind_int(insertResultStatement, 2, i);

                    int err = sqlite3_step(insertResultStatement);
                    if(SQLITE_ERROR == err)
                    {
                        NSAssert1(0, @"Error while inserting Result. '%s'", sqlite3_errmsg(database));
                        success = NO;
                    }
                    else
                    {
                        success = YES;
                    }
                    sqlite3_finalize(insertResultStatement);
                    insertResultStatement = nil;
                }
            }
            sqlite3_close(database);
            return success;
        }

    The sqlite3_step call always returns 19. I'm not able to understand where the issue is. The tables are created using the following queries:

        CREATE TABLE [Patient] (PID integer NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
                                PFirstName text NOT NULL, PLastName text, PSex text NOT NULL,
                                PDOB text NOT NULL, PEducation text NOT NULL,
                                PHandedness text, PType text)

        CREATE TABLE PatientResult(PID INTEGER,
                                   PFreeScore INTEGER NOT NULL,
                                   PForcedScore INTEGER NOT NULL,
                                   FOREIGN KEY (PID) REFERENCES Patient(PID))

    I've only one entry in the Patient table, with PID = 1.

        BOOL createStatement(const char* query, sqlite3_stmt** stmt, sqlite3* database)
        {
            BOOL res = (sqlite3_prepare_v2(database, query, -1, stmt, NULL) == SQLITE_OK);
            if(!res)
                NSLog(@"Error while creating %s => '%s'", query, sqlite3_errmsg(database));
            return res;
        }
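
    A hedged reading of the failure: 19 is SQLITE_CONSTRAINT, and sqlite3_bind_* parameter indices are 1-based, so the bind at index 0 fails (SQLITE_RANGE) and one column is left unbound, i.e. NULL, which violates a NOT NULL constraint. A sketch of the corrected binding, assuming the INSERT uses three ? placeholders in table order:

        // Hypothetical placeholder SQL assumed by this sketch:
        //   INSERT INTO PatientResult(PID, PFreeScore, PForcedScore) VALUES(?, ?, ?)
        sqlite3_bind_int(insertResultStatement, 1, i);  // PID (leftmost ? is index 1)
        sqlite3_bind_int(insertResultStatement, 2, i);  // PFreeScore
        sqlite3_bind_int(insertResultStatement, 3, i);  // PForcedScore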

  • Obtaining command line arguments in a Qt application

    - by morpheous
    The following snippet is from a little app I wrote using the Qt framework. The idea is that the app can be run in batch mode (i.e. called by a script) or interactively. It is important, therefore, that I am able to parse command line arguments in order to know which mode to run in, etc.

    [Edit] I am debugging using Qt Creator 1.3.1 on Ubuntu Karmic. The arguments are passed in the normal way (i.e. by adding them via the 'Project' settings in the Qt Creator IDE). When I run the app, it appears that the arguments are not being passed to the application. The code below is a snippet of my main() function:

        int main(int argc, char *argv[])
        {
            //Q_INIT_RESOURCE(application);
            try {
                QApplication the_app(argc, argv);

                //trying to get the arguments into a list
                QStringList cmdline_args = QCoreApplication::arguments();

                // Code continues ...
            }
            catch (const MyCustomException &e) {
                return 1;
            }
            return 0;
        }

    [Update] I have identified the problem: for some reason, although argc is correct, the elements of argv are empty strings. I put in this little code snippet to print out the argv items, and was horrified to see that they were all empty:

        for (int i = 0; i < argc; i++) {
            std::string s(argv[i]); //required so I can see the damn variable in the debugger
            std::cout << s << std::endl;
        }

    Does anyone know what on earth is going on (or has a hammer)?
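
    A quick isolation step (a minimal sketch, assuming nothing beyond the Qt headers): run a stripped binary from a shell rather than from the IDE. If the arguments print correctly there, the empty argv elements point at the Qt Creator run configuration rather than at the code:

        #include <QCoreApplication>
        #include <QStringList>
        #include <QtDebug>

        int main(int argc, char *argv[])
        {
            QCoreApplication app(argc, argv);

            // e.g. run from a shell:  ./the_app --batch input.dat
            qDebug() << QCoreApplication::arguments();
            return 0;
        }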

  • CustomButtonField not aligning in the center

    - by rupesh
    Hi all. I have created a custom button, and I am placing a bunch of custom buttons in a VerticalFieldManager. I have aligned the VerticalFieldManager in the center. When I create a default ButtonField, the VerticalFieldManager is able to align the button field in the center, but when I assign my custom button field to the VerticalFieldManager, it is not aligned in the center. Here is my custom button code:

        public CustomButtonField(String label, long style)
        {
            super(style);
            this.label = label;
            onPicture = Bitmap.getBitmapResource(onPicturePath);
            font = getFont();
            this.setPadding(5, 5, 5, 5);
        }

        public String getLabel()
        {
            return label;
        }

        public int getPreferredHeight()
        {
            return onPicture.getHeight();
        }

        public int getPreferredWidth()
        {
            return onPicture.getWidth();
        }

        protected void layout(int width, int height)
        {
            setExtent(Math.min(width, Display.getWidth()),
                      Math.min(height, getPreferredHeight()));
        }

        protected void paint(Graphics graphics)
        {
            int texty = (getHeight() - getFont().getHeight()) / 2;
            if (isFocus()) {
                graphics.setColor(Color.BLACK);
                graphics.drawBitmap(0, 0, getWidth(), getHeight(), onPicture, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.setFont(font);
                graphics.drawText(label, 0, texty, DrawStyle.ELLIPSIS, getWidth());
            } else {
                graphics.drawBitmap(0, 0, getWidth(), getHeight(), onPicture, 0, 0);
                graphics.setColor(Color.WHITE);
                graphics.setFont(font);
                graphics.drawText(label, 0, texty, DrawStyle.ELLIPSIS, getWidth());
            }
        }

        public boolean isFocusable()
        {
            return true;
        }

        protected void onFocus(int direction)
        {
            super.onFocus(direction);
            invalidate();
        }

        protected void onUnfocus()
        {
            super.onUnfocus();
            invalidate();
        }

        protected boolean navigationClick(int status, int time)
        {
            fieldChangeNotify(0);
            return true;
        }

        protected boolean keyChar(char character, int status, int time)
        {
            if (character == Keypad.KEY_ENTER) {
                fieldChangeNotify(0);
                return true;
            }
            return super.keyChar(character, status, time);
        }
    }

  • An algorithm for filtering raw txt files

    - by Roman Luštrik
    Imagine you have a .txt file with the following structure:

        >>> header
        >>> header
        >>> header
        K      L      M
        200    0.1    1
        201    0.8    1
        202    0.01   3
        ...
        800    0.4    2
        >>> end of file
        50     0.1    1
        75     0.78   5
        ...

    I would like to read all the data except the lines denoted by >>> and the lines below the ">>> end of file" line. So far I've solved this using read.table(comment.char = ">", skip = x, nrow = y) (x and y are currently fixed). This reads the data between the header and ">>> end of file". However, I would like to make my function a bit more flexible regarding the number of rows: the data may have values larger than 800, and consequently more rows. I could scan or readLines the file, see which row corresponds to ">>> end of file", and calculate the number of lines to be read. What approach would you use?

  • C++ Dynamic Allocation Mismatch: Is this problematic?

    - by acanaday
    I have been assigned to work on some legacy C++ code in MFC. One of the things I am finding all over the place are allocations like the following:

        struct Point {
            float x, y, z;
        };
        ...
        void someFunc( void )
        {
            int numPoints = ...;
            Point* pArray = (Point*)new BYTE[ numPoints * sizeof(Point) ];
            ...
            //do some stuff with points
            ...
            delete [] pArray;
        }

    I realize that this code is atrociously wrong on so many levels (C-style cast, using new like malloc, confusing, etc). I also realize that if Point had defined a constructor it would not be called, and weird things would happen at delete [] if a destructor had been defined.

    Question: I am in the process of fixing these occurrences wherever they appear as a matter of course. However, I have never seen anything like this before and it has got me wondering. Does this code have the potential to cause memory leaks/corruption as it stands currently (no constructor/destructor, but with the pointer type mismatch), or is it safe as long as the array just contains structs/primitive types?
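
    A sketch of the usual cleanup, for comparison (my suggestion, not something from the original post): with a trivial Point no constructors or destructors run either way, so in practice the pattern tends to work, but deleting through a Point* what was allocated as a BYTE[] is still a formally mismatched new[]/delete[] pair. Allocating with the right type, or letting a container own the memory, removes the question entirely:

        #include <vector>

        struct Point { float x, y, z; };

        void someFunc()
        {
            int numPoints = 100;   // placeholder for the elided "..." initializer

            // Option 1: matched new[]/delete[] types
            Point* pArray = new Point[numPoints];
            // ...do some stuff with points...
            delete [] pArray;

            // Option 2: no manual delete at all
            std::vector<Point> points(numPoints);
            // ...do some stuff with points.data()...
        }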

  • C++: calling member functions within constructor?

    - by powerboy
    The following code raises a runtime error:

        #include <iostream>
        #include <iterator>
        #include <ext/slist>

        class IntList : public __gnu_cxx::slist<int> {
        public:
            typedef IntList::iterator iterator;

            IntList() { tail_ = begin(); } // seems that there is a problem here

            void append(const int node) { tail_ = insert_after(tail_, node); }

        private:
            iterator tail_;
        };

        int main() {
            IntList list;
            list.append(1);
            list.append(2);
            list.append(3);
            for (IntList::iterator i = list.begin(); i != list.end(); ++i) {
                std::cout << *i << " ";
            }
            return 0;
        }

    It seems that the problem is in the constructor IntList(). Is it because it calls the member function begin()?
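
    A hedged diagnosis sketch: calling begin() in the constructor is fine; the trouble is that on an empty slist begin() equals end(), and insert_after(end(), ...) is invalid. slist provides before_begin() precisely as an anchor for inserting at the front, so (assuming the rest of the class stays the same) the one-line change below should fix the crash:

        // Inside IntList, everything else unchanged:
        IntList() { tail_ = before_begin(); }   // a valid "insert after" anchor
                                                // even when the list is empty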

  • How do you use Boost iterators?

    - by Neil G
    It worked, and then I added the typedefs so that I could have a const_sparse_iterator as well. Now, when I compile this and try to use sparse_iterator, it says:

        /Users/neilrg/nn/src/./core/sparse_vector.h:331: error: invalid use of incomplete type 'struct sparse_vector<A>::sparse_iterator'

    Here's the code (more code at my previous question):

        template<typename T>
        class sparse_vector {
            // There is more code at my previous question, but this might be enough...?
        private:
            template<typename base_type>
            class sparse_iterator_private
                : public boost::iterator_adaptor<
                      sparse_iterator_private<base_type>    // Derived
                    , base_type                             // Base
                    , value_type                            // Value
                    , boost::random_access_traversal_tag    // CategoryOrTraversal
                  >
            {
            private:
                struct enabler {}; // a private type avoids misuse

            public:
                sparse_iterator_private()
                    : sparse_iterator_private<base_type>::iterator_adaptor_(0) {}

                explicit sparse_iterator_private(typename array_type::iterator&& p)
                    : sparse_iterator_private<base_type>::iterator_adaptor_(p) {}

            private:
                friend class boost::iterator_core_access;
                reference dereference() const { return this->base()->value; }
            };

        public:
            typedef sparse_iterator_private<typename array_type::iterator>       sparse_iterator;
            typedef sparse_iterator_private<typename array_type::const_iterator> const_sparse_iterator;
        };
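
    For comparison, a self-contained iterator_adaptor that compiles on its own (hypothetical names, not the poster's class). Reducing a failing class toward a known-good skeleton like this is often the quickest way to localise an "incomplete type" error, which typically points at something being used before it is fully defined (here, candidates would be value_type, array_type, or the adaptor itself):

        #include <boost/iterator/iterator_adaptor.hpp>
        #include <iostream>
        #include <vector>

        struct node { int index; float value; };

        // Adapts a vector<node> iterator so dereferencing yields node::value.
        template<typename Base>
        class value_iterator
            : public boost::iterator_adaptor<
                  value_iterator<Base>                  // Derived
                , Base                                  // Base
                , float                                 // Value
                , boost::random_access_traversal_tag    // CategoryOrTraversal
              >
        {
        public:
            value_iterator() : value_iterator::iterator_adaptor_() {}
            explicit value_iterator(Base p) : value_iterator::iterator_adaptor_(p) {}

        private:
            friend class boost::iterator_core_access;
            float& dereference() const { return this->base()->value; }
        };

        int main() {
            std::vector<node> v;
            v.push_back(node());
            v.back().value = 1.5f;

            value_iterator<std::vector<node>::iterator> it(v.begin());
            std::cout << *it << std::endl;   // prints 1.5
            return 0;
        }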

  • Time gaps between host clEnqueue_xxx calls

    - by dialer
    Consider these OpenCL calls (3 memcpy DtoH, 4313 cl_float elements each):

        clEnqueueReadBuffer(CommandQueue, SpectrumAbsMem, CL_FALSE, 0, SpectrumMemSize, SpectrumAbs, 0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumReMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumRe,  0, NULL, NULL);
        clEnqueueReadBuffer(CommandQueue, SpectrumImMem,  CL_FALSE, 0, SpectrumMemSize, SpectrumIm,  0, NULL, NULL);

    When I analyze these with the NVIDIA visual profiler, I see that the actual memcpy operation takes only 8 us, but there is a significant gap of around 130 us after each memcpy. I'm already using the supposedly asynchronous method (the CL_FALSE in the argument list). When I use only one operation, but with three times the size, the operation is much faster. Why is the time gap between the actual memcpy operations so huge, whereas the gap between the kernel execution (immediately before these three operations) and the first memcpy is only 7 us? Can I get rid of it, or do I need to accumulate more data before starting a memcpy? If so, is there a convenient way to combine multiple arrays into a single contiguous block of memory, but still have a cl_mem object as a separate device memory pointer to each section?
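
    On the last question, a sketch of one possibility (assuming an OpenCL 1.1 platform, which added clCreateSubBuffer): allocate one parent buffer, give each spectrum its own cl_mem via a sub-buffer region so kernels can still take them as separate arguments, and read the whole block back in a single transfer. Note that sub-buffer origins must respect the device's CL_DEVICE_MEM_BASE_ADDR_ALIGN alignment:

        // Hypothetical names (ctx, SpectrumAll); SpectrumMemSize as in the original.
        size_t sz = SpectrumMemSize;                  // bytes per spectrum
        cl_int err = CL_SUCCESS;

        cl_mem parent = clCreateBuffer(ctx, CL_MEM_READ_WRITE, 3 * sz, NULL, &err);

        cl_buffer_region regAbs = { 0 * sz, sz };
        cl_buffer_region regRe  = { 1 * sz, sz };
        cl_buffer_region regIm  = { 2 * sz, sz };

        cl_mem SpectrumAbsMem = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                    CL_BUFFER_CREATE_TYPE_REGION, &regAbs, &err);
        cl_mem SpectrumReMem  = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                    CL_BUFFER_CREATE_TYPE_REGION, &regRe, &err);
        cl_mem SpectrumImMem  = clCreateSubBuffer(parent, CL_MEM_READ_WRITE,
                                    CL_BUFFER_CREATE_TYPE_REGION, &regIm, &err);

        // One DtoH transfer instead of three; SpectrumAll must hold 3 * sz bytes.
        clEnqueueReadBuffer(CommandQueue, parent, CL_FALSE, 0, 3 * sz,
                            SpectrumAll, 0, NULL, NULL);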
