Search Results

Search found 7715 results on 309 pages for 'ms pl'.

  • Capture reload/endrequest event after server redirect to download file

    - by Prutswonder
    Inside a web page I have an Excel download button, which redirects to a page that serves the requested Excel file via the application/ms-excel MIME type, which usually results in a file download in the browser. In the page, I have the following jQuery code: $(document).ready(function () { $(".div-export .button").click(function () { setBusy(true); }); Sys.WebForms.PageRequestManager.getInstance().add_endRequest(function () { setBusy(false); }); }); This displays a busy animation while the user waits for the Excel file to be served. The problem is that the animation doesn't end (setBusy(false);) after the file download, because the endRequest event never fires, probably because of the server redirect. Does anyone have a workaround for this? Edit: The download button is handled inside an UpdatePanel.
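
    A workaround that is often used for the busy-indicator-versus-file-download problem (this is a general pattern, not something from the original post) is to have the download page set a marker cookie alongside the file response, and to have the client poll document.cookie on a timer and call setBusy(false) once the cookie appears, since endRequest never fires for a full-page file response. A C# sketch of the server side, with the cookie name, file name and helper method invented for illustration:

        using System;
        using System.Web;
        using System.Web.UI;

        // Code-behind sketch for the page that serves the Excel file. The cookie name,
        // file name and BuildExcelReport helper are illustrative, not from the original post.
        public partial class ExcelExport : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                byte[] excelBytes = BuildExcelReport();

                // Marker cookie; the client polls document.cookie for it and then calls setBusy(false).
                Response.AppendCookie(new HttpCookie("fileDownloadToken", "done") { Path = "/" });

                Response.ContentType = "application/ms-excel";
                Response.AddHeader("Content-Disposition", "attachment; filename=export.xls");
                Response.BinaryWrite(excelBytes);
                Response.End();
            }

            private byte[] BuildExcelReport()
            {
                // Assumed helper: builds the spreadsheet bytes however the real page does it.
                return new byte[0];
            }
        }

    On the client, a small setInterval loop that watches for fileDownloadToken in document.cookie (and clears it afterwards) takes over the job the endRequest handler was doing for this one button.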

  • Best way to show code snippets in Word?

    - by Larry
    Does anyone know a good way to display code in Microsoft Word documents? I have tried including code as regular text, which looks awful and gets in the way when editing the surrounding text. I have also tried inserting objects (a WordPad document and a text box) into the document and then putting the code inside those objects. The code looks much better and is easier to avoid while editing the rest of the text. However, these objects can only span one page, which makes editing a nightmare when several pages of code need to be added. Lastly, I know there are much better editors/formats that have no problem handling this, but I am stuck working with MS Word.

  • Stored insert procedure in plpgsql

    - by crazyphoton
    I want to do something like this in PostgreSQL. I tried this: CREATE or replace FUNCTION create_patient(_name text, _email text, _phone text, _password text, _field1 text, _field2 text, _field3 timestamp, _field4 text, OUT _pid integer, OUT _id integer) RETURNS record AS $$ DECLARE _id integer; _type text; _pid integer; BEGIN _type := 'patient'; INSERT into patients (name, email, phone, field1, field2, field3) values (_name, _email, _phone, _field1, _field2, _field3) RETURNING id into _pid; INSERT into users (username, password, type, pid, phone, language) values (_email, _password, _type, _pid, _phone, _field4) RETURNING id into _id; END; $$ LANGUAGE plpgsql; But there are a lot of instances where I would not want to specify some of field1/field2/field3/field4 and want the unspecified fields to use the default value in the table. Currently that is not possible, because to call this function I need to specify all fields. TLDR; Is there a simple way to create a wrapper procedure for INSERT in PL/pgSQL where I can specify which fields I want to insert?

  • Mono ASP.NET COM Reference

    - by Benny
    I am sure this is a very dumb question to be asking for such a platform as Mono, but I am really stuck with .NET on one of my only remaining projects on MS platforms and would like to move away from it. The only problem is that the web site is dependent on a COM library that is simply a socket wrapper enforcing a messaging protocol. I could reverse the code (I actually made a 10k line attempt) but there's nothing better than the original if it works. Is there any way to reference a tlb export on Mono? Any advice would be greatly appreciated. Thanks in advance!

  • NLB and Host Header Value

    - by Hafeez
    Background: We are using MOSS 2007 in a farm configuration: 2 WFEs, 1 indexer and SQL Server. MS NLB is used for load balancing. A host header value, mapped in DNS to the virtual IP of the cluster, is used when creating the web applications in MOSS, and all of them share port 80. Problem: When a client tries to access one of the web applications configured with host header values, both WFEs hang for 5 minutes, they stop responding to ping, and the browser shows 'Page not found'. In the Application Log on the WFE, this error is registered: "provider: TCP Provider, error: 0 - The semaphore timeout period has expired". Interestingly, the web application with no host header value, hosted on a different port, works correctly. Any clue to solving this problem would be helpful. Thanks. Hafeez

  • asp:FileUpload not working in UpdatePanel

    - by James123
    The asp:FileUpload control is not working inside an UpdatePanel in an .ascx control. Why? Is there any workaround? <span dir="ltr"> <asp:FileUpload ID="InputFile" runat="server" class="ms-fileinput" size="35" /> </span> I also added <Triggers> <asp:PostBackTrigger ControlID="btnOK" /> </Triggers> but it is still not working.
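
    For what it's worth, the same full-postback registration can also be done from code-behind with ScriptManager.RegisterPostBackControl, which avoids depending on where the Triggers section sits. A minimal sketch (a general ASP.NET AJAX pattern, not taken from the post), assuming btnOK is the button whose click handler reads the file and that InputFile and btnOK come from the designer file:

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // Code-behind sketch for the .ascx user control.
        public partial class UploadControl : UserControl
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // FileUpload only posts its file on a full (non-async) postback, so the button
                // must be registered for a regular postback even though it sits in an UpdatePanel.
                ScriptManager scriptManager = ScriptManager.GetCurrent(Page);
                if (scriptManager != null)
                {
                    scriptManager.RegisterPostBackControl(btnOK);
                }
            }

            protected void btnOK_Click(object sender, EventArgs e)
            {
                if (InputFile.HasFile)
                {
                    // Illustrative target path only.
                    InputFile.SaveAs(Server.MapPath("~/Uploads/" + InputFile.FileName));
                }
            }
        }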

  • Logging: which is the best way?

    - by Tony
    Hi. People who talk about loggers here never talk about EventLog; I think it is a good fit for a Windows system. Is it reliable, or will I find it dead some bad morning? Why not log everything to SQL Server? I am creating an e-commerce website, and if SQL Server is down the website will be down anyway, but I am worried about temporary connection failures. What do you think? Why does everyone like files? They can grow very large, too big to handle; maybe I will create another file when a file gets too big, and I can create a file per date. Has anyone tried the MS Enterprise Library? Tell me about it. Thanks
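
    For reference, writing to the Windows Event Log from .NET is only a few lines. This is just an illustration of the API mentioned above, with a made-up source name; note that creating the source needs administrative rights and is usually done once at install time:

        using System.Diagnostics;

        class EventLogExample
        {
            static void Main()
            {
                const string source = "MyShop";   // hypothetical source name

                // Creating the source needs administrative rights; normally done once by an installer.
                if (!EventLog.SourceExists(source))
                {
                    EventLog.CreateEventSource(source, "Application");
                }

                EventLog.WriteEntry(source, "Payment gateway timeout while processing order 1234.",
                                    EventLogEntryType.Error);
            }
        }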

  • Thoughts on using Alpha Five v10 with Codeless AJAX for building an AJAX database app in a short amount of time

    - by william Hunter
    I need to build an AJAX application against our MS SQL Server database for my company. The app has to have user permissions and reporting, and is pretty complex. I am really under the gun in terms of time. The company that I work for needs the app for an important project launch. A colleague/friend of mine in a different company recommended that I look at a product from Alpha Software called Alpha Five v10 with Codeless AJAX. He has told me that he has used it extensively, that it saves him a "serious boat load of time", and that he has not run into limitations because you can also write your own JavaScript or wire in jQuery. Before I commit to Alpha Five v10, I would like to get some other opinions. Thanks. Norman Stern, Chicago

  • NHibernate transaction management in ASP.NET MVC - how should it be done?

    - by adrin
    I am writing a simple ASP.NET MVC application using the session-per-request and transaction-per-request patterns (via a custom HttpModule). It seems to work properly, but the performance is terrible (a simple page loads in ~7 seconds). A transaction is created for every HTTP request, including requests for graphical resources (all images on the site), and that seems to delay the loading times (without transactions, load times per image are ~1-10 ms; with transactions they are over 1 second). What is the proper way to manage transactions in an ASP.NET MVC + NH stack? When I put all transactions into my repository methods, for some obscure reason I got 'implicit transactions' warnings in NHProf (the SQL statements were executed outside a transaction, even though in code the session.Save()/Update()/etc. methods were invoked within the transaction's 'using' scope and before the transaction.Commit() call). BTW, are implicit transactions really bad?
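
    One commonly suggested alternative to a transaction-per-request HttpModule is to begin and commit the transaction around controller actions only, so image and other static-resource requests never touch NHibernate. This is a sketch of the general idea, not the poster's code; SessionProvider is a hypothetical stand-in for however the current session is exposed:

        using System.Web.Mvc;
        using NHibernate;

        // Hypothetical holder for the session that the existing per-request HttpModule opens.
        public static class SessionProvider
        {
            public static ISession Current { get; set; }
        }

        // Scopes an NHibernate transaction to a controller action instead of to every request.
        public class TransactionPerActionAttribute : ActionFilterAttribute
        {
            private const string TxKey = "nhibernate.transaction";

            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                filterContext.HttpContext.Items[TxKey] = SessionProvider.Current.BeginTransaction();
            }

            public override void OnActionExecuted(ActionExecutedContext filterContext)
            {
                var tx = (ITransaction)filterContext.HttpContext.Items[TxKey];
                if (tx == null)
                {
                    return;
                }
                if (filterContext.Exception == null)
                {
                    tx.Commit();
                }
                else
                {
                    tx.Rollback();
                }
                tx.Dispose();
            }
        }

    Controllers (or individual actions) would then be decorated with [TransactionPerAction], while the HttpModule keeps handling only session open/close.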

  • I need free index/fund/stock end of day quotes in CSV

    - by Janne Mikkola
    Hello, I need a (free or cheap) source for end-of-day stock/mutual fund/index values. Major world indexes and European stocks are the primary interest. I keep seeing that Yahoo/Google/MS offer this data, yet I can't find a how-to doc (or similar) on getting it. Reuters is an option, but at ~$300/month it is out of my range. A sample of what I am looking for: WMX.IDX,20100326,54.49,54.6,54.17,54.41,0 XAH.IDX,20100326,52.39,52.77,52.33,52.54,0 XAL.IDX,20100326,37.34,38.4,37.34,37.59,0 XAO.IDX,20100326,4896.2998,4905.2002,4848.2998,4905.2002,0 I wish to get this data into a text file in an automated manner. My platform is Linux (I also have a PC with Windows, and a Windows emulator under Linux, so Windows is an option). http://www.eoddata.com/ is the best site I have found so far. It is quite good, yet I would like more coverage of European finance. Please advise! Sincerely, Janne

  • C#-Excel interop - create chart on workbook as opposed to in a sheet

    - by david.barkhuizen
    Using the C# MS Excel interop library, I would like to programmatically create a new chart on the workbook, as opposed to on a sheet. The code below allows me to create a chart on an existing _Worksheet (sheet). using Microsoft.Office.Interop.Excel; _Worksheet sheet; (assume this is a reference to a valid _Worksheet object) ChartObjects charts = (ChartObjects)sheet.ChartObjects(Type.Missing); ChartObject chartObject = (ChartObject)charts.Add(10, 80, 300, 250); Chart chart = chartObject.Chart; chart.ChartType = XlChartType.xlXYScatter; Does anyone know how to instead go about creating a chart on the workbook (i.e. where the chart is the sheet)?
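
    If memory serves, the Excel object model distinguishes embedded charts (Worksheet.ChartObjects, as above) from chart sheets, which live in the workbook's Charts collection. A sketch along those lines, with the data range and chart name invented for illustration:

        using System;
        using Microsoft.Office.Interop.Excel;

        class ChartSheetExample
        {
            // Sketch only: assumes workbook is an open Workbook and sheet contains the source data.
            static void AddChartSheet(Workbook workbook, _Worksheet sheet)
            {
                // Charts.Add creates a chart *sheet* (a chart that is its own tab),
                // unlike Worksheet.ChartObjects, which embeds a chart on an existing sheet.
                Chart chart = (Chart)workbook.Charts.Add(Type.Missing, Type.Missing,
                                                         Type.Missing, Type.Missing);
                chart.ChartType = XlChartType.xlXYScatter;
                chart.SetSourceData(sheet.get_Range("A1", "B10"), Type.Missing);
                chart.Name = "ScatterChart";   // hypothetical sheet name
            }
        }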

  • Cannot read value from SYS_CONTEXT

    - by AppleGrew
    I have a PL/SQL procedure which sets some variable in the user session, like the following: Dbms_Session.Set_Context( NAMESPACE =>'MY_CTX', ATTRIBUTE => 'FLAG_NAME', Value => 'some value'); Just after this (in the same procedure), I try to read the value of this flag using: SYS_CONTEXT('MY_CTX', 'FLAG_NAME'); The above returns nothing. How did the DB lose this value? The weirder part is that if I invoke this proc directly from Oracle SQL Developer then it works. It doesn't work when I invoke the proc from my web application via a callable statement. --EDIT-- Added an example of how we are invoking the proc from our Java code. String statement = "Begin package_name.proc_name( flag_val => :1); END;"; OracleCallableStatement st = <some object by some framework> .createCallableStatement(statement); st.setString(1, "flag value"); st.execute(); st.close();

  • What open source database platform is most easily transferred from my personal machine into a window

    - by Tom
    I would like eventual interaction with MS Dynamics SL and/or MindTouch Core (running on VMware) for intranet and/or internet display. I guess I am asking for front-end and back-end recommendations for a database I am constructing, but since this is my first major project I would greatly appreciate any help and advice. I would also love an opportunity to learn a new language, so the code base could be in any language. I do have a few more related questions for discussion: What is the viability of using Google hosting to provide the service to the public for free? Should I implement Plone or another CMS if I have a large amount of output? Is there a structuring questionnaire or standards publication I could reference? Does UML diagramming provide additional options for portability? Thank you.

  • Is there a C# open-source search app which scales cheaply?

    - by domspurling
    I need to quickly replace a listings website which has the following characteristics: a smallish database (10,000 items, < 1GB); fewer than 10% of the items updated/created/removed daily; the most common activity is searching the whole dataset, returning 1-1000 items; traffic peaks at 1m page impressions per day. The scaling strategy for the existing app has been to separate read-only and read/write activity: multiple slave databases are used for searching, and writes are done to a master, which updates the slaves using MS SQL replication. Since read activity is more common than write activity, this has proved to be a cheap way to do database load balancing without true clustering. I now need to replace the app - are there any C# open-source apps which scale as neatly as this?
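
    Whatever ends up replacing the app, the read/write split described above amounts to very little code on the .NET side. A rough sketch, where the connection strings and the round-robin choice are purely illustrative and not from the post:

        using System.Data.SqlClient;
        using System.Threading;

        // Illustrative read/write split: reads are spread over replicated slave copies,
        // writes go to the master. Connection strings are placeholders.
        static class ConnectionFactory
        {
            private static readonly string Master =
                "Server=master;Database=Listings;Integrated Security=true";
            private static readonly string[] Slaves =
            {
                "Server=slave1;Database=Listings;Integrated Security=true",
                "Server=slave2;Database=Listings;Integrated Security=true"
            };

            private static int _next;

            public static SqlConnection OpenForWrite()
            {
                var conn = new SqlConnection(Master);
                conn.Open();
                return conn;
            }

            public static SqlConnection OpenForRead()
            {
                // Simple round-robin over the read-only copies; the mask keeps the index non-negative.
                int index = (Interlocked.Increment(ref _next) & int.MaxValue) % Slaves.Length;
                var conn = new SqlConnection(Slaves[index]);
                conn.Open();
                return conn;
            }
        }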

  • search & replace on 3000 row, 25 column spreadsheet

    - by Deca
    I'm attempting to clean up data in this (old) spreadsheet and need to remove things like single and double quotes, HTML tags and so on. Trouble is, it's a 3000 row file with 25 columns and every spreadsheet app I've tried (NeoOffice, MS Excel, Apple Numbers) chokes on it. Hard. Any ideas on how else I can clean this thing up for import to MySQL? Clearly I could go through each record manually, row by row, but would like to avoid that if at all possible. Likewise, I could write a PHP script to handle it on import, but don't want to put the server into a death spiral either.

  • Strange: planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts: PGSQL 8.4.2, Linux I make use of table inheritance Each Table contains 3 million rows Indexes on joining columns are set Table statistics (analyze, vacuum analyze) are up-to-date Only used table is "node" with varios partitioned sub-tables Recursive query (pg = 8.4) Now here is the explained query: WITH RECURSIVE rows AS ( SELECT * FROM ( SELECT r.id, r.set, r.parent, r.masterid FROM d_storage.node_dataset r WHERE masterid = 3533933 ) q UNION ALL SELECT * FROM ( SELECT c.id, c.set, c.parent, r.masterid FROM rows r JOIN a_storage.node c ON c.parent = r.id ) q ) SELECT r.masterid, r.id AS nodeid FROM rows r QUERY PLAN ----------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Hash Join (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3) Hash Cond: (c.parent = r.id) -> Append (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3) -> Seq Scan on node_dataset c (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3) -> Seq Scan on node_stammdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3) -> Seq Scan on node_stammdaten_adresse c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3) -> Seq Scan on node_testdaten c (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3) -> Hash (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3) Total runtime: 172111.371 ms (16 rows) (END) So far so bad, the planner decides to choose hash joins (good) but no indexes (bad). 
Now after doing the following: SET enable_hashjoins TO false; The explained query looks like that: QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- CTE Scan on rows r (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1) CTE rows -> Recursive Union (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1) -> Index Scan using node_dataset_masterid on node_dataset r (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1) Index Cond: (masterid = 3533933) -> Nested Loop (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3) Join Filter: (r.id = c.parent) -> WorkTable Scan on rows r (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3) -> Append (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4) -> Seq Scan on node c (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4) -> Bitmap Heap Scan on node_dataset c (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4) Recheck Cond: (c.parent = r.id) -> Bitmap Index Scan on node_dataset_parent (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_parent on node_stammdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4) Index Cond: (c.parent = r.id) -> Index Scan using node_testdaten_parent on node_testdaten c (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4) Index Cond: (c.parent = r.id) Total runtime: 49.349 ms (21 rows) (END) - incredibly faster, because indexes were used. Notice: Cost of the second query ist somewhat higher than for the first query. So the main question is: Why does the planner make the first decision, instead of the second? Also interesing: Via SET enable_seqscan TO false; i temp. disabled seq scans. Than the planner used indexes and hash joins, and the query still was slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? thx, R.

  • What exactly is the danger of an uninitialized pointer in C?

    - by akh2103
    I am trying to get a handle on C as I work my way through Trevor Jim's "Cyclone: A safe dialect of C" for a PL class. Jim and his co-authors are trying to make a safe version of C, so they eliminate uninitialized pointers in their language. Googling around a bit on uninitialized pointers, it seems like uninitialized pointers point to random locations in memory. It seems like this alone makes them unsafe: if you dereference an uninitialized pointer, you jump to an unsafe part of memory. Period. But the way Jim talks about them seems to imply that it is more complex. He cites the following code, and explains that when the function FrmGetObjectIndex dereferences f, it isn't accessing a valid pointer, but rather an unpredictable address: whatever was on the stack when the space for f was allocated. What does Jim mean by "whatever was on the stack when the space for f was allocated"? Are "uninitialized" pointers initialized to random locations in memory by default? Or does their "random" behavior have to do with the memory allocated for these pointers getting filled with strange values (that are then referenced) because of unexpected behavior on the stack? Form *f; switch (event->eType) { case frmOpenEvent: f = FrmGetActiveForm(); ... case ctlSelectEvent: i = FrmGetObjectIndex(f, field); ... }

  • IntelliJ IDEA non-standard caret behaviour

    - by Vaat666
    I have an issue with IntelliJ IDEA when selecting a large amount of text, and I cannot find the setting that changes this. Here is an example of the situation: my caret is on line 3; I scroll with the mouse wheel towards line 300; I press Ctrl+Shift; I press the left mouse button. Such an action would result in the text from line 3 to 300 being selected in all common editors (even in MS Word, I think), but not in IntelliJ. Do you know how to set this right? Thanks!

  • Is Perl's flip-flop operator bugged? It has global state, how can I reset it?

    - by Evan Carroll
    I'm dismayed. OK, so this was probably the most fun Perl bug I've ever found. Even today I'm learning new stuff about Perl. Essentially, the flip-flop operator .., which returns false until the left-hand side returns true, and then true until the right-hand side returns false, keeps global state (or that is what I assume). My question is: can I reset it (perhaps this would be a good addition to the perl4-esque, hardly ever used reset())? Or is there no way to use this operator safely? I also don't see this (the global context bit) documented anywhere in perldoc perlop; is this a mistake? Code: use feature ':5.10'; use strict; use warnings; sub search { my $arr = shift; grep { !( /start/ .. /never_exist/ ) } @$arr; } my @foo = qw/foo bar start baz end quz quz/; my @bar = qw/foo bar start baz end quz quz/; say 'first shot - foo'; say for search \@foo; say 'second shot - bar'; say for search \@bar; Spoiler: $ perl test.pl first shot foo bar second shot

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable jar so I can share it with friends. But this simply packs up the whole project, which can become several megabytes big if I'm relying on a large library such as httpclient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types till the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration? EDIT: Here's an example. Say I'm experimenting with web scraping, and so far I've tried to scrape the search-result pages of both youtube.com and wrzuta.pl. I have a bunch of classes that implement scraping in general, and a few that are specific to youtube or wrzuta. On top of this I have a basic gui common to both scrapers, but with a few wrzuta- and youtube-specific buttons and options. The WrzutaGuiMain and YoutubeGuiMain classes each contain a main method to configure and show the gui for each respective website. Can Eclipse look at each of these to determine which types are referenced?

  • How to install DBD::mysql on OS X Server?

    - by Zoran Simic
    Trying to install DBD::mysql on OS X Server 10.6 (mac mini server). But I'm missing the mysql headers apparently. Since mysql is already part of OS X Server 10.6, I would like to NOT install anything else (no fink or darwin ports installs), just whatever's needed to get DBD::mysql installed and working. Do you know how I could do that? Do I have to install the headers somewhere? And if so, where? (again: I don't want to install another version of mysql on the box, want to use the version it came with). Is there a way to install DBD::mysql without compiling any C files? This is the error I get (the actual error is much longer, but these are the most meaningful bits, this is the first error reported). Checking if your kit is complete... Looks good Unrecognized argument in LIBS ignored: '-pipe' Note (probably harmless): No library found for -lmysqlclient Multiple copies of Driver.xst found in: /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ /System/Library/Perl/Extras/5.10.0/darwin-thread-multi-2level/auto/DBI/ at Makefile.PL line 907 Using DBI 1.611 (for perl 5.010000 on darwin-thread-multi-2level) installed in /Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI/ Writing Makefile for DBD::mysql cp lib/DBD/mysql.pm blib/lib/DBD/mysql.pm cp lib/DBD/mysql/GetInfo.pm blib/lib/DBD/mysql/GetInfo.pm cp lib/DBD/mysql/INSTALL.pod blib/lib/DBD/mysql/INSTALL.pod cp lib/Bundle/DBD/mysql.pm blib/lib/Bundle/DBD/mysql.pm gcc-4.2 -c -I/Library/Perl/5.10.0/darwin-thread-multi-2level/auto/DBI -I/usr/include -fno-omit-frame-pointer -pipe -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDBD_MYSQL_INSERT_ID_IS_GOOD -g -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -Os -DVERSION=\"4.014\" -DXS_VERSION=\"4.014\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" dbdimp.c In file included from dbdimp.c:20: dbdimp.h:22:49: error: mysql.h: No such file or directory dbdimp.h:23:45: error: mysqld_error.h: No such file or directory dbdimp.h:25:49: error: errmsg.h: No such file or directory

  • Perl, regex, extract data from a line

    - by perlnoob
    I'm trying to extract part of a line with Perl: use strict; use warnings; # Set path for my.txt and extract datadir my @myfile = "C:\backups\MySQL\my.txt"; my @datadir = ""; open READMYFILE, @myfile or die "Error, my.txt not found.\n"; while (<READMYFILE>) { # Read file and extract DataDir path if (/C:\backups/gi) { push @datadir, $_; } } # ensure the path was found print @datadir . " \n"; Basically, first I'm trying to set the location of the my.txt file. Next I'm trying to read it and pull part of the line with a regex. The error I'm getting is: Unrecognized escape \m passed through at 1130.pl line 17. I took a look at http://stackoverflow.com/questions/1040657/how-can-i-grab-multiple-lines-after-a-matching-line-in-perl to get an idea of how to read a file and match a line within it, however I'm not 100% sure I'm doing this right or in the best way. I also seem to produce the error: Error, my.txt not found. But the file does exist in the folder C:\backups\MySQL\

  • How to use ORDER BY, LOWER, etc. in SQL Server 2008 with non-Unicode languages

    - by hgulyan
    Hi, the question is about Armenian. I'm using SQL Server 2005, collation SQL_Latin1_General_CP1_CI_AS; the data is mostly in Armenian and we can't use Unicode. I tested on MS SQL 2008 with a Windows collation for the Armenian language (Cyrillic_General_100_), which I found here (http://msdn.microsoft.com/en-us/library/ms188046.aspx), but it didn't help. I have a function that orders hex values and a lower function that takes each character in a string and converts it to its lowercase form, but that's not an acceptable solution: it works really slowly, since those functions are called on every column of a huge table. Is there any solution for this issue without using Unicode or handling hex values manually?

  • Vim + OmniCppComplete and completing members of class members

    - by Robert S. Barnes
    I've noticed that I can't seem to complete members of class members using OmniCppComplete. For example, given the following files: // foo.h #include <string> class foo { public: void set_str(const std::string &); std::string get_str_reverse( void ); private: std::string str; }; // foo.cpp #include "foo.h" using std::string; string foo::get_str_reverse ( void ) { string temp; temp.assign(str); reverse(temp.begin(), temp.end()); return temp; } /* ----- end of method foo::get_str ----- */ void foo::set_str ( const string &s ) { str.assign(s); } /* ----- end of method foo::set_str ----- */ I've set up tags for stdlibc++ and generated the tags for these two files using: ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q . When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members?
