Search Results

Search found 1670 results on 67 pages for 'prefix'.


  • Debugging matchit plugin in vim (under Cygwin)

    - by system PAUSE
    The "matchit" plugin for vim is supposed to allow you to use the % key to jump between matching start/end tags when editing HTML, as well as /* and */ comment delimiters when editing other kinds of code. I've followed the exact instructions in ":help matchit", but % still doesn't work for me. It seems silly to ask "Why doesn't this work?" so instead I'm asking How can I diagnose the problem? Pointers to references are welcome, but specific vim-plugin-debugging techniques are preferred. Here is the ~/.vim directory: $ ls -ltaGR ~/.vim /cygdrive/y/.vim: total 0 drwxr-xr-x 1 spause 0 Sep 17 13:20 .. drwxr-xr-x 1 spause 0 Sep 16 13:59 doc drwxr-xr-x 1 spause 0 Sep 16 13:58 . drwxr-xr-x 1 spause 0 Sep 16 13:58 plugin /cygdrive/y/.vim/doc: total 24 -rw-r--r-- 1 spause 1961 Sep 16 13:59 tags drwxr-xr-x 1 spause 0 Sep 16 13:59 . -rw-r--r-- 1 spause 19303 Sep 16 13:58 matchit.txt drwxr-xr-x 1 spause 0 Sep 16 13:58 .. /cygdrive/y/.vim/plugin: total 32 drwxr-xr-x 1 spause 0 Sep 16 13:58 .. -rw-r--r-- 1 spause 30714 Sep 16 13:58 matchit.vim drwxr-xr-x 1 spause 0 Sep 16 13:58 . I am running vim 7.2 under Cygwin (installed Fall 2008). cygcheck shows: 1829k 2008/06/12 C:\cygwin\bin\cygwin1.dll Cygwin DLL version info: DLL version: 1.5.25 DLL epoch: 19 DLL bad signal mask: 19005 DLL old termios: 5 DLL malloc env: 28 API major: 0 API minor: 156 Shared data: 4 DLL identifier: cygwin1 Mount registry: 2 Cygnus registry name: Cygnus Solutions Cygwin registry name: Cygwin Program options name: Program Options Cygwin mount registry name: mounts v2 Cygdrive flags: cygdrive flags Cygdrive prefix: cygdrive prefix Cygdrive default prefix: Build date: Thu Jun 12 19:34:46 CEST 2008 CVS tag: cr-0x5f1 Shared id: cygwin1S4 In vim, :set shows: --- Options --- autoindent fileformat=dos shiftwidth=3 background=dark filetype=html syntax=html cedit=^F scroll=24 tabstop=3 expandtab shelltemp textmode viminfo='20,<50,s10,h Notably, the syntax and filetype are both recognized as HTML. (The syntax colouring is just fine.) If additional info is needed, please comment. UPDATE: Per answer by too much php: After trying vim -V1, I had changed my .vimrc to include a line set nocp so the compatible option is not on. :let loadad_matchit loaded_matchit #1 :set runtimepath? runtimepath=~/.vim,/usr/share/vim/vimfiles,/usr/share/vim/vim72,/usr/share/vim/vimfiles/after,~/.vim/after (~ is /cygdrive/y) Per answer by michael: :scriptnames 1: /cygdrive/y/.vimrc 2: /usr/share/vim/vim72/syntax/syntax.vim 3: /usr/share/vim/vim72/syntax/synload.vim 4: /usr/share/vim/vim72/syntax/syncolor.vim 5: /usr/share/vim/vim72/filetype.vim 6: /usr/share/vim/vim72/colors/evening.vim 7: /cygdrive/y/.vim/plugin/matchit.vim 8: /cygdrive/y/.vim/plugin/python_match.vim 9: /usr/share/vim/vim72/plugin/getscriptPlugin.vim 10: /usr/share/vim/vim72/plugin/gzip.vim 11: /usr/share/vim/vim72/plugin/matchparen.vim 12: /usr/share/vim/vim72/plugin/netrwPlugin.vim 13: /usr/share/vim/vim72/plugin/rrhelper.vim 14: /usr/share/vim/vim72/plugin/spellfile.vim 15: /usr/share/vim/vim72/plugin/tarPlugin.vim 16: /usr/share/vim/vim72/plugin/tohtml.vim 17: /usr/share/vim/vim72/plugin/vimballPlugin.vim 18: /usr/share/vim/vim72/plugin/zipPlugin.vim 19: /usr/share/vim/vim72/syntax/html.vim 20: /usr/share/vim/vim72/syntax/javascript.vim 21: /usr/share/vim/vim72/syntax/vb.vim 22: /usr/share/vim/vim72/syntax/css.vim Note that matchit.vim, html.vim, tohtml.vim, css.vim, and javascript.vim are all present. 
:echo b:match_words E121: Undefined variable: b:match_words E15: Invalid expression: b:match_words Hm, this looks highly relevant. I'm now looking through :help matchit-debug to find out how to fix b:match_words.


  • What is the best way to solve an Objective-C namespace collision?

    - by Mecki
    Objective-C has no namespaces; much like C, everything lives in one global namespace. Common practice is to prefix class names with initials, e.g. if you are working at IBM, you could prefix them with "IBM"; if you work for Microsoft, you could use "MS"; and so on. Sometimes the initials refer to the project, e.g. Adium prefixes classes with "AI" (as there is no company behind it whose initials you could take). Apple prefixes classes with "NS" and says this prefix is reserved for Apple only.

    So far so good. But prepending 2 to 4 letters to a class name is a very, very limited namespace. E.g. "MS" or "AI" could have entirely different meanings ("AI" could be Artificial Intelligence, for example), and some other developer might decide to use them and create an equally named class. Bang, namespace collision.

    Okay, if this is a collision between one of your own classes and one from an external framework you are using, you can easily rename your class, no big deal. But what if you use two external frameworks, both of which you have no source for and can't change? Your application links with both of them and you get name conflicts. How would you go about solving these? What is the best way to work around them so that you can still use both classes?

    In C you can work around this by not linking directly to the library; instead you load the library at runtime using dlopen(), find the symbol you are looking for using dlsym(), assign it to a global symbol (that you can name any way you like), and then access it through this global symbol. E.g. if you have a conflict because some C library has a function named open(), you could define a variable named myOpen and have it point to the open() function of the library; when you want the system open(), you just use open(), and when you want the other one, you access it via the myOpen identifier.

    Is something similar possible in Objective-C, and if not, is there any other clever, tricky solution you can use to resolve namespace conflicts? Any ideas?

    Update: Just to clarify: answers that suggest how to avoid namespace collisions in advance, or how to create a better namespace, are certainly welcome; however, I will not accept them as the answer, since they don't solve my problem. I have two libraries and their class names collide. I can't change them; I don't have the source of either one. The collision is already there, and tips on how it could have been avoided in advance won't help anymore. I can forward them to the developers of these frameworks and hope they choose a better namespace in the future, but for the time being I'm searching for a solution to work with the frameworks right now, within a single application. Any solutions to make this possible?
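    (For comparison across runtimes, not an Objective-C answer: Java faces the same two-libraries-one-name problem and solves it without prefixes by loading each library in its own class loader, which is the moral equivalent of the dlopen()/dlsym() trick described above. A minimal sketch, with hypothetical jar paths and class name:)

        import java.net.URL;
        import java.net.URLClassLoader;

        public class IsolatedLoading {
            public static void main(String[] args) throws Exception {
                // Hypothetical jars that both contain a class named com.example.Parser.
                // Passing null as the parent loader keeps the two namespaces separate.
                URLClassLoader loaderA = new URLClassLoader(
                        new URL[] { new URL("file:/libs/frameworkA.jar") }, null);
                URLClassLoader loaderB = new URLClassLoader(
                        new URL[] { new URL("file:/libs/frameworkB.jar") }, null);

                // Each loader resolves the name independently, so both classes
                // coexist in one process despite the identical name.
                Class<?> parserA = loaderA.loadClass("com.example.Parser");
                Class<?> parserB = loaderB.loadClass("com.example.Parser");

                System.out.println(parserA.getClassLoader() + " vs " + parserB.getClassLoader());
            }
        }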


  • Creating a custom module for Orchard

    - by Moran Monovich
    I created a custom module using this guide from the Orchard documentation, but for some reason I can't see the fields in the content type when I want to create a new one.

    This is my model:

        public class CustomerPartRecord : ContentPartRecord {
            public virtual string FirstName { get; set; }
            public virtual string LastName { get; set; }
            public virtual int PhoneNumber { get; set; }
            public virtual string Address { get; set; }
            public virtual string Profession { get; set; }
            public virtual string ProDescription { get; set; }
            public virtual int Hours { get; set; }
        }

        public class CustomerPart : ContentPart<CustomerPartRecord> {
            [Required(ErrorMessage = "you must enter your first name")]
            [StringLength(200)]
            public string FirstName {
                get { return Record.FirstName; }
                set { Record.FirstName = value; }
            }

            [Required(ErrorMessage = "you must enter your last name")]
            [StringLength(200)]
            public string LastName {
                get { return Record.LastName; }
                set { Record.LastName = value; }
            }

            [Required(ErrorMessage = "you must enter your phone number")]
            [DataType(DataType.PhoneNumber)]
            public int PhoneNumber {
                get { return Record.PhoneNumber; }
                set { Record.PhoneNumber = value; }
            }

            [StringLength(200)]
            public string Address {
                get { return Record.Address; }
                set { Record.Address = value; }
            }

            [Required(ErrorMessage = "you must enter your profession")]
            [StringLength(200)]
            public string Profession {
                get { return Record.Profession; }
                set { Record.Profession = value; }
            }

            [StringLength(500)]
            public string ProDescription {
                get { return Record.ProDescription; }
                set { Record.ProDescription = value; }
            }

            [Required(ErrorMessage = "you must enter your hours")]
            public int Hours {
                get { return Record.Hours; }
                set { Record.Hours = value; }
            }
        }

    This is the handler:

        class CustomerHandler : ContentHandler {
            public CustomerHandler(IRepository<CustomerPartRecord> repository) {
                Filters.Add(StorageFilter.For(repository));
            }
        }

    The driver:

        class CustomerDriver : ContentPartDriver<CustomerPart> {
            protected override DriverResult Display(CustomerPart part, string displayType, dynamic shapeHelper) {
                return ContentShape("Parts_Customer", () => shapeHelper.Parts_BankCustomer(
                    FirstName: part.FirstName,
                    LastName: part.LastName,
                    PhoneNumber: part.PhoneNumber,
                    Address: part.Address,
                    Profession: part.Profession,
                    ProDescription: part.ProDescription,
                    Hours: part.Hours));
            }

            //GET
            protected override DriverResult Editor(CustomerPart part, dynamic shapeHelper) {
                return ContentShape("Parts_Customer", () => shapeHelper.EditorTemplate(
                    TemplateName: "Parts/Customer",
                    Model: part,
                    Prefix: Prefix));
            }

            //POST
            protected override DriverResult Editor(CustomerPart part, IUpdateModel updater, dynamic shapeHelper) {
                updater.TryUpdateModel(part, Prefix, null, null);
                return Editor(part, shapeHelper);
            }
        }

    The migration:

        public class Migrations : DataMigrationImpl {
            public int Create() {
                // Creating table CustomerPartRecord
                SchemaBuilder.CreateTable("CustomerPartRecord", table => table
                    .ContentPartRecord()
                    .Column("FirstName", DbType.String)
                    .Column("LastName", DbType.String)
                    .Column("PhoneNumber", DbType.Int32)
                    .Column("Address", DbType.String)
                    .Column("Profession", DbType.String)
                    .Column("ProDescription", DbType.String)
                    .Column("Hours", DbType.Int32)
                );
                return 1;
            }

            public int UpdateFrom1() {
                ContentDefinitionManager.AlterPartDefinition("CustomerPart",
                    builder => builder.Attachable());
                return 2;
            }

            public int UpdateFrom2() {
                ContentDefinitionManager.AlterTypeDefinition("Customer", cfg => cfg
                    .WithPart("CommonPart")
                    .WithPart("RoutePart")
                    .WithPart("BodyPart")
                    .WithPart("CustomerPart")
                    .WithPart("CommentsPart")
                    .WithPart("TagsPart")
                    .WithPart("LocalizationPart")
                    .Creatable()
                    .Indexed());
                return 3;
            }
        }

    Can someone please tell me if I am missing something?


  • Strange behaviour of CUDA kernel

    - by username_4567
    I'm writing code for calculating a prefix sum. Here is my kernel:

        __global__ void prescan(int *indata, int *outdata, int n, long int *sums)
        {
            extern __shared__ int temp[];
            int tid = threadIdx.x;
            int offset = 1, start_id, end_id;
            int *global_sum = &temp[n+2];
            if (tid == 0)
            {
                temp[n] = blockDim.x * blockIdx.x;
                temp[n+1] = blockDim.x * (blockIdx.x + 1) - 1;
                start_id = temp[n];
                end_id = temp[n+1];
                //cuPrintf("Value of start %d and end %d\n", start_id, end_id);
            }
            __syncthreads();
            start_id = temp[n];
            end_id = temp[n+1];
            temp[tid] = indata[start_id + tid];
            temp[tid+1] = indata[start_id + tid + 1];
            for (int d = n >> 1; d > 0; d >>= 1)
            {
                __syncthreads();
                if (tid < d)
                {
                    int ai = offset * (2*tid+1) - 1;
                    int bi = offset * (2*tid+2) - 1;
                    temp[bi] += temp[ai];
                }
                offset *= 2;
            }
            if (tid == 0)
            {
                sums[blockIdx.x] = temp[n-1];
                temp[n-1] = 0;
                cuPrintf("sums %d\n", sums[blockIdx.x]);
            }
            for (int d = 1; d < n; d *= 2)
            {
                offset >>= 1;
                __syncthreads();
                if (tid < d)
                {
                    int ai = offset * (2*tid+1) - 1;
                    int bi = offset * (2*tid+2) - 1;
                    int t = temp[ai];
                    temp[ai] = temp[bi];
                    temp[bi] += t;
                }
            }
            __syncthreads();
            if (tid == 0)
            {
                outdata[start_id] = 0;
            }
            __threadfence_block();
            __syncthreads();
            outdata[start_id + tid] = temp[tid];
            outdata[start_id + tid + 1] = temp[tid+1];
            __syncthreads();
            if (tid == 0)
            {
                temp[0] = 0;
                outdata[start_id] = 0;
            }
            __threadfence_block();
            __syncthreads();
            if (blockIdx.x == 0 && threadIdx.x == 0)
            {
                for (int i = 1; i < gridDim.x; i++)
                {
                    sums[i] = sums[i] + sums[i-1];
                }
            }
            __syncthreads();
            __threadfence();
            if (blockIdx.x == 0 && threadIdx.x == 0)
            {
                for (int i = 0; i < gridDim.x; i++)
                {
                    cuPrintf("****sums[%d]=%d ", i, sums[i]);
                }
            }
            __syncthreads();
            __threadfence();
            if (blockIdx.x != gridDim.x - 1)
            {
                int tid = (blockIdx.x + 1) * blockDim.x + threadIdx.x;
                if (threadIdx.x == 0)
                    cuPrintf("Adding %d \n", sums[blockIdx.x]);
                outdata[tid] += sums[blockIdx.x];
            }
            __syncthreads();
        }

    In the above kernel, the sums array accumulates the per-block prefix sums, and then the first thread computes the prefix sum of this sums array. Now, if I print the sums array from the device side it shows correct results, but at the line cuPrintf("Adding %d \n", sums[blockIdx.x]); it prints the old (pre-accumulation) value. What could be the reason?
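    (One practical debugging aid, independent of the visibility question: run the same up-sweep/down-sweep (Blelloch) passes on the host and diff the kernel's per-block output against it. A minimal sequential reference in Java; a checking aid only, not CUDA, and it assumes n is a power of two:)

        import java.util.Arrays;

        public class ExclusiveScan {
            // Exclusive prefix sum via the same up-sweep/down-sweep passes the
            // kernel performs in shared memory; n must be a power of two.
            static int[] blellochScan(int[] indata) {
                int n = indata.length;
                int[] temp = Arrays.copyOf(indata, n);
                // Up-sweep: build partial sums in a balanced tree.
                for (int d = n >> 1, offset = 1; d > 0; d >>= 1, offset <<= 1) {
                    for (int tid = 0; tid < d; tid++) {
                        int ai = offset * (2 * tid + 1) - 1;
                        int bi = offset * (2 * tid + 2) - 1;
                        temp[bi] += temp[ai];
                    }
                }
                temp[n - 1] = 0; // the total is what the kernel writes to sums[blockIdx.x]
                // Down-sweep: distribute the partial sums back down the tree.
                for (int d = 1, offset = n >> 1; d < n; d <<= 1, offset >>= 1) {
                    for (int tid = 0; tid < d; tid++) {
                        int ai = offset * (2 * tid + 1) - 1;
                        int bi = offset * (2 * tid + 2) - 1;
                        int t = temp[ai];
                        temp[ai] = temp[bi];
                        temp[bi] += t;
                    }
                }
                return temp;
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(
                        blellochScan(new int[] {3, 1, 7, 0, 4, 1, 6, 3})));
                // expected: [0, 3, 4, 11, 11, 15, 16, 22]
            }
        }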


  • SQL SERVER – Error: Fix – Msg 208 – Invalid object name ‘dbo.backupset’ – Invalid object name ‘dbo.backupfile’

    - by pinaldave
    Just a day before, I got a very interesting email. Here is the email (modified a bit to make it relevant to this blog post).

    "Pinal, We are facing a very strange issue. One of our queries related to backup files and backup sets has suddenly stopped working in SSMS. It works fine in our application and in the stored procedure, but when we run it in SSMS it gives the following error:

        Msg 208, Level 16, State 1, Line 1
        Invalid object name 'dbo.backupfile'.

    Here are the queries which we are trying to execute:

        SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
        FROM dbo.backupset;

        SELECT logical_name, backup_size, file_type
        FROM dbo.backupfile;

    These queries give us details related to the backup set and backup files from when the backup was taken."

    When I receive this kind of email, usually I have no answers directly. The claim that it works in the stored procedure and in the application but not in SSMS gives me no real data. I first asked him to check two things: Is he connected to the correct server? His answer was yes. Does he have enough permissions? His answer was that he was logged in as an admin. This meant there was something more to it, and I asked him to send me a screenshot of his SSMS. He promptly sent it, and as soon as I received the screenshot I knew what was going on. Before I say anything, take a look at the screenshot yourself and see if you can figure out why the queries are not working in SSMS. Just to make your life a bit easier, I have already given a hint in the image.

    The answer is very simple: the context of the database is the master database. To execute the above two queries, the database context has to be msdb. The tables backupset and backupfile belong to the msdb database only.

    Here are two workarounds for the problem:

    1) Change context to msdb. When run as follows, the two queries will not error out and will give the accurate desired result:

        USE msdb
        GO
        SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
        FROM dbo.backupset;

        SELECT logical_name, backup_size, file_type
        FROM dbo.backupfile;

    2) Prefix the query with msdb. In cases where the above script is used in a stored procedure or as part of a bigger query, it is not possible to change the context of the whole query to any specific database. Use the three-part naming convention and prefix the tables with msdb:

        SELECT name, database_name, backup_size, TYPE, compatibility_level, backup_set_id
        FROM msdb.dbo.backupset;

        SELECT logical_name, backup_size, file_type
        FROM msdb.dbo.backupfile;

    A very simple solution, but one that sometimes keeps people wondering for an answer.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its data model to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on.

    When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL, it's actually logical SQL, e.g.

        select Jobs."Job Title" as "Job Title",
            Employees."Last Name" as "Last Name",
            Employees.Salary as Salary,
            Locations."Department Name" as "Department Name",
            Locations."Country Name" as "Country Name",
            Locations."Region Name" as "Region Name"
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model, just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it and passes the result back to BIP.

        select distinct T32556.JOB_TITLE as c1,
            T32543.LAST_NAME as c2,
            T32543.SALARY as c3,
            T32537.DEPARTMENT_NAME as c4,
            T32532.COUNTRY_NAME as c5,
            T32577.REGION_NAME as c6
        from JOBS T32556, REGIONS T32577, COUNTRIES T32532,
            LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
        where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
            and T32532.REGION_ID = T32577.REGION_ID
            and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
            and T32537.LOCATION_ID = T32569.LOCATION_ID
            and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps:

    1. In the Administrator tool you need to set the logging level for the Administrator user to something greater than the default '0'. '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service.

    2. Now here's the secret sauce. Prefix the following to your BIP query:

        set variable LOGLEVEL = 7;

    Set the log level to the value you have in the Admin tool. Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file, located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you.

    A quick note: if the BIServer can, it's going to hit that great BIEE cache to get your data, and you may not see the full log. If this is the case, get into the Administration page (via the browser login), clear out your BIP report cursor, and then re-run.

    This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is getting some strange data. Don't forget to turn that logging level back down once you are done. This will avoid the DBA screaming at you for sucking up all the disk space on the system.


  • Url rewrite subfolder to root and forbid accessing subfolder

    - by Alessandro Pezzato
    I have drupal installed in a subfolder drupal, but I want to access pages as if it were in the root folder: http://www.example.com instead of http://www.example.com/drupal

    I'm able to get this working, but URLs containing the subfolder also still work, so I have http://www.example.com and a clone site at http://www.example.com/drupal

    What is the rule to forbid access to the subfolder? I want all URLs starting with http://www.example.com/drupal to be forbidden.

    This is the .htaccess in the / directory:

        Options -Indexes
        Options +FollowSymLinks
        <IfModule mod_rewrite.c>
          RewriteEngine on
          RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
          RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]
          RewriteRule ^(.*+)$ drupal/$1 [L,QSA]
        </IfModule>

    And this is the drupal .htaccess in the /drupal/ directory:

        Options -Indexes
        Options +FollowSymLinks
        ErrorDocument 404 index.php
        DirectoryIndex index.php index.html index.htm

        # Override PHP settings that cannot be changed at runtime. See
        # sites/default/default.settings.php and drupal_initialize_variables() in
        # includes/bootstrap.inc for settings that can be changed at runtime.

        # PHP 5, Apache 1 and 2.
        <IfModule mod_php5.c>
          php_flag magic_quotes_gpc off
          php_flag magic_quotes_sybase off
          php_flag register_globals off
          php_flag session.auto_start off
          php_value mbstring.http_input pass
          php_value mbstring.http_output pass
          php_flag mbstring.encoding_translation off
        </IfModule>

        # Requires mod_expires to be enabled.
        <IfModule mod_expires.c>
          # Enable expirations.
          ExpiresActive On
          # Cache all files for 2 weeks after access (A).
          ExpiresDefault A1209600
          <FilesMatch \.php$>
            # Do not allow PHP scripts to be cached unless they explicitly send cache
            # headers themselves. Otherwise all scripts would have to overwrite the
            # headers set by mod_expires if they want another caching behavior. This may
            # fail if an error occurs early in the bootstrap process, and it may cause
            # problems if a non-Drupal PHP file is installed in a subdirectory.
            ExpiresActive Off
          </FilesMatch>
        </IfModule>

        # Various rewrite rules.
        <IfModule mod_rewrite.c>
          RewriteEngine on

          # Block access to "hidden" directories whose names begin with a period. This
          # includes directories used by version control systems such as Subversion or
          # Git to store control files. Files whose names begin with a period, as well
          # as the control files used by CVS, are protected by the FilesMatch directive
          # above.
          RewriteRule "(^|/)\." - [F]

          # To redirect all users to access the site WITH the 'www.' prefix,
          # (http://example.com/... will be redirected to http://www.example.com/...)
          # uncomment the following:
          # RewriteCond %{HTTP_HOST} !^www\. [NC]
          # RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
          #
          # To redirect all users to access the site WITHOUT the 'www.' prefix,
          # (http://www.example.com/... will be redirected to http://example.com/...)
          # uncomment the following:
          RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
          RewriteRule ^ http://%1%{REQUEST_URI} [L,R=301]

          RewriteBase /drupal

          # Pass all requests not referring directly to files in the filesystem to
          # index.php. Clean URLs are handled in drupal_environment_initialize().
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteCond %{REQUEST_FILENAME} !-d
          RewriteCond %{REQUEST_URI} !=/favicon.ico
          #RewriteRule ^ index.php [L]
          RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

          # Rules to correctly serve gzip compressed CSS and JS files.
          # Requires both mod_rewrite and mod_headers to be enabled.
          <IfModule mod_headers.c>
            # Serve gzip compressed CSS files if they exist and the client accepts gzip.
            RewriteCond %{HTTP:Accept-encoding} gzip
            RewriteCond %{REQUEST_FILENAME}\.gz -s
            RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

            # Serve gzip compressed JS files if they exist and the client accepts gzip.
            RewriteCond %{HTTP:Accept-encoding} gzip
            RewriteCond %{REQUEST_FILENAME}\.gz -s
            RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

            # Serve correct content types, and prevent mod_deflate double gzip.
            RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
            RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

            <FilesMatch "(\.js\.gz|\.css\.gz)$">
              # Serve correct encoding type.
              Header append Content-Encoding gzip
              # Force proxies to cache gzipped & non-gzipped css/js files separately.
              Header append Vary Accept-Encoding
            </FilesMatch>
          </IfModule>
        </IfModule>


  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using properties, like volatile or const, on the parameters; this is perfectly legal C++:

        public ref class VolatileMethods {
            void Method(int *i) {}
            void Method(volatile int *i) {}
        }

    If volatile were specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there would be nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately to its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

        .class public VolatileMethods {
            .method public instance void Method(int32* i) {}
            .method public instance void Method(
                int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
        }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though they are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

        call instance void Method(
            int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers; modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as the modifier argument are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong, they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.

    So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along.

    Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.


  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction

    In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '--incremental-base' was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. Description of this option can be found here. Instead of '--start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB extracts the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup.

    Because of popular demand, in MEB 3.7.1 the option '--incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

    Details

    A new prefix 'history:' has been introduced for the --incremental-base option, and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command:

        $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ \
            --incremental-base=history:last_backup backup

    When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

        if can_read_files_at_BD:
            if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
                continue_with_backup()
            else:
                return_with_error()
        else:
            continue_with_backup()

    Advantages

    Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command

        $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ \
            --incremental-base=history:last_backup backup

    can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo.

    Limitations

    The option '--incremental-base=history:last_backup':

      - should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations);
      - should not be used after any temporary or experimental backups performed on the server (which were successful!);
      - needs to be used with precaution, since any intermediate successful backup without the --no-connection option will be used as the base backup for the next incremental backup;
      - will give an error in case a valid backup exists at the location of the last successful backup whose end LSN is different from that of the last successful backup found in the backup_history table.

    Date: 2012-06-19


  • Wireless is detected, but not connecting. Ethernet works. How to correct the wireless address?

    - by Lucas
    I am running Ubuntu 14.04 with cable internet, and my wireless is detected and connected, but I cannot connect to the internet. I know the problem is with my machine because other machines are connecting to the same router just fine. I can connect via ethernet just fine as well.

    Here are some notable tests: ping 192.168.0.105 works with 0% packet loss, but ping 192.168.0.1 has 100% packet loss. When I plug in my ethernet, ping 192.168.0.1 works with 0% packet loss. My wireless name is tg, and the router ip is 192.168.0.1 (where I can enter username and password). I suspect that I need to change my wireless address from 192.168.0.105 to 192.168.0.1. Any suggestions on how to proceed?

    Extra info:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ iwconfig
        eth0      no wireless extensions.
        lo        no wireless extensions.
        wlan0     IEEE 802.11abgn  ESSID:"tg"
                  Mode:Managed  Frequency:2.462 GHz  Access Point: 00:02:6F:83:F8:F4
                  Bit Rate=1 Mb/s   Tx-Power=15 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=62/70  Signal level=-48 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:52  Invalid misc:166   Missed beacon:0

        [lucas@lucas-ThinkPad-W520]/home/lucas$ ifconfig
        eth0      Link encap:Ethernet  HWaddr f0:de:f1:b2:53:53
                  inet addr:192.168.0.100  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::f2de:f1ff:feb2:5353/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:980003 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:498384 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:1320506168 (1.3 GB)  TX bytes:59780591 (59.7 MB)
                  Interrupt:20 Memory:f3a00000-f3a20000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:65536  Metric:1
                  RX packets:21927 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:21927 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:1781719 (1.7 MB)  TX bytes:1781719 (1.7 MB)

        wlan0     Link encap:Ethernet  HWaddr 24:77:03:29:8f:dc
                  inet addr:192.168.0.105  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::2677:3ff:fe29:8fdc/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:11828 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:15444 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:4855662 (4.8 MB)  TX bytes:2250585 (2.2 MB)

        [lucas@lucas-ThinkPad-W520]/home/lucas$ lspci -nn | grep 0280
        03:00.0 Network controller [0280]: Intel Corporation Centrino Ultimate-N 6300 [8086:4238] (rev 3e)

        [lucas@lucas-ThinkPad-W520]/home/lucas$ rfkill list
        0: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no
        1: tpacpi_bluetooth_sw: Bluetooth
                Soft blocked: no
                Hard blocked: no
        2: phy0: Wireless LAN
                Soft blocked: no
                Hard blocked: no

    With ethernet unplugged:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 wlan0

    With ethernet plugged in:

        [lucas@lucas-ThinkPad-W520]/home/lucas$ route -n | grep UG
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

        [lucas@lucas-ThinkPad-W520]/home/lucas$ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: wlan0 [tg] ----------------------------------------------------------
          Type:              802.11 WiFi
          Driver:            iwlwifi
          State:             connected
          Default:           no
          HW Address:        24:77:03:29:8F:DC

          Capabilities:
            Speed:           52 Mb/s

          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes

          Wireless Access Points (* = current AP)
            tatum:            Infra, 40:8B:07:D8:A5:04, Freq 2437 MHz, Rate 54 Mb/s, Strength 42 WPA WPA2
            ums:              Infra, 00:20:A6:72:52:BF, Freq 2437 MHz, Rate 54 Mb/s, Strength 59
            Alpha 40:         Infra, 28:CF:E9:86:59:5D, Freq 5260 MHz, Rate 54 Mb/s, Strength 30 WPA WPA2
            thepromiselan:    Infra, 58:6D:8F:51:E5:54, Freq 2452 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2
            xfinitywifi:      Infra, 06:1D:D5:84:27:A0, Freq 2437 MHz, Rate 54 Mb/s, Strength 52
            *tg:              Infra, 00:02:6F:83:F8:F4, Freq 2462 MHz, Rate 54 Mb/s, Strength 73 WPA2
            ums:              Infra, 00:20:A6:A1:9F:25, Freq 2452 MHz, Rate 54 Mb/s, Strength 44
            BRIAN-PC_Network: Infra, 20:AA:4B:DD:93:D6, Freq 2462 MHz, Rate 54 Mb/s, Strength 35 WPA2
            HOME-C0F8:        Infra, 44:32:C8:D2:C0:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 40 WPA WPA2
            abcsexy:          Infra, 28:28:5D:27:5D:85, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 WPA WPA2

          IPv4 Settings:
            Address:         192.168.0.105
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.1
            DNS:             192.168.0.1

        - Device: eth0 [Wired connection 1] -------------------------------------------
          Type:              Wired
          Driver:            e1000e
          State:             connected
          Default:           yes
          HW Address:        F0:DE:F1:B2:53:53

          Capabilities:
            Carrier Detect:  yes
            Speed:           100 Mb/s

          Wired Properties
            Carrier:         on

          IPv4 Settings:
            Address:         192.168.0.100
            Prefix:          24 (255.255.255.0)
            Gateway:         192.168.0.1
            DNS:             192.168.0.1


  • Some OBI EE Tricks and Tips in the Admin Tool By Gerry Langton

    - by hamsun
    How to set the log level from a Session variable Initialization block

    As we know, it is normal to set the log level non-zero for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user's properties in the Admin tool and update the log level. So I am showing a method which allows the log level to be set for all users via a session initialization block. This is particularly useful for anyone wanting an alternative way to set the log level. The screenshots shown use the OBIEE 11g SampleApp demo but are applicable to any environment.

    Open the appropriate rpd in on-line mode and navigate to Manage Variables. Select Session Initialization Blocks, right click in the white space and create a New Initialization Block. I called the initialization block Set_Loglevel. Now click on 'Edit Data Source' to enter the SQL. Choose the 'Use OBI EE Server' option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in on-line mode. The SQL can access any of the RPD tables and is purely used to return a value of 2. The 'Test' button confirms that the SQL is valid.

    Next, click on the 'Edit Data Target' button to add the LOGLEVEL variable to the initialization block. Check the 'Enable any user to set the value' option so that this will work for any user. Click OK and a confirmation message will display, as LOGLEVEL is a system session variable; click 'Yes'. Click 'OK' to save the initialization block, then check in the on-line changes.

    To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click 'View Log' for the query just issued (which should be approximately the last in the list). A log file should exist, and with LOGLEVEL set to 2 it should include both logical and physical SQL. If more diagnostic information is required, then set LOGLEVEL to a higher value.

    If logging is required only for a particular analysis, then an alternative method can be used directly from the Analysis editor. Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box:

        SET VARIABLE LOGLEVEL = 2;

    Click the 'Apply SQL' button. The SET VARIABLE statement will now prefix the analysis's logical SQL, so that any time this analysis is run it will produce a log.

    You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at [email protected] if you have any further questions.

    About the Author: Gerry Langton started at Siebel Systems in 1999 working as a technical instructor teaching both Siebel application development and also Siebel Analytics (which subsequently became Oracle BI EE). From 2006 Gerry has worked as Senior Principal Instructor within Oracle University, specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.


  • Unknown Entity namespace alias in symfony2

    - by Zoha Ali Khan
    Hey, I have two bundles in my symfony2 project: one is Bundle and the other one is PatentBundle. My app/config/route.yml file is:

        MunichInnovationGroupPatentBundle:
            resource: "@MunichInnovationGroupPatentBundle/Controller/"
            type: annotation
            prefix: /
            defaults: { _controller: "MunichInnovationGroupPatentBundle:Default:index" }

        MunichInnovationGroupBundle:
            resource: "@MunichInnovationGroupBundle/Controller/"
            type: annotation
            prefix: /v1
            defaults: { _controller: "MunichInnovationGroupBundle:Patent:index" }

        login_check:
            pattern: /login_check

        logout:
            pattern: /logout

    Inside my controller I have:

        <?php
        namespace MunichInnovationGroup\PatentBundle\Controller;

        use Symfony\Component\HttpFoundation\Response;
        use Symfony\Component\HttpFoundation\Request;
        use JMS\SecurityExtraPatentBundle\Annotation\Secure;
        use Symfony\Component\Security\Core\Exception\AccessDeniedException;
        use Symfony\Bundle\FrameworkBundle\Controller\Controller;
        use Sensio\Bundle\FrameworkExtraBundle\Configuration\Method;
        use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
        use Sensio\Bundle\FrameworkExtraBundle\Configuration\Template;
        use Symfony\Component\Security\Core\SecurityContext;
        use MunichInnovationGroup\PatentBundle\Entity\Log;
        use MunichInnovationGroup\PatentBundle\Entity\UserPatent;
        use MunichInnovationGroup\PatentBundle\Entity\PmPortfolios;
        use MunichInnovationGroup\PatentBundle\Entity\UmUsers;
        use MunichInnovationGroup\PatentBundle\Entity\PmPatentgroups;
        use MunichInnovationGroup\PatentBundle\Form\PortfolioType;
        use MunichInnovationGroup\PatentBundle\Util\SecurityHelper;
        use Exception;

        /**
         * Portfolio controller.
         * @Route("/portfolio")
         */
        class PortfolioController extends Controller
        {
            /**
             * Index action.
             *
             * @Route("/", name="v2_pm_portfolio")
             * @Template("MunichInnovationGroupPatentBundle:Portfolio:index.html.twig")
             */
            public function indexAction(Request $request)
            {
                $portfolios = $this->getDoctrine()
                    ->getRepository('MunichInnovationGroupPatentBundle:PmPortfolios')
                    ->findBy(array('user' => '$user_id'));
                // rest of the method
            }
        }

    When I try to load localhost/web/app_dev.php/portfolio, it says:

        Unknown Entity namespace alias 'MunichInnovationGroupPatentBundle'.
        500 Internal Server Error - ORMException

    I am unable to figure out this error. Please help me if anyone has any idea; I googled it a lot :( Thanks in advance.


  • Build 32-bit with 64-bit llvm-gcc

    - by Jay Conrod
    I have a 64-bit version of llvm-gcc, but I want to be able to build both 32-bit and 64-bit binaries. Is there a flag for this? I tried passing -m32 (which works on the regular gcc), but I get an error message like this:

        [jay@andesite]$ llvm-gcc -m32 test.c -o test
        Warning: Generation of 64-bit code for a 32-bit processor requested.
        Warning: 64-bit processors all have at least SSE2.
        /tmp/cchzYo9t.s: Assembler messages:
        /tmp/cchzYo9t.s:8: Error: bad register name `%rbp'
        /tmp/cchzYo9t.s:9: Error: bad register name `%rsp'
        ...

    This is backwards; I want to generate 32-bit code for a 64-bit processor! I'm running llvm-gcc 4.2, the one that comes with Ubuntu 9.04 x86-64.

    EDIT: Here is the relevant part of the output when I run llvm-gcc with the -v flag:

        [jay@andesite]$ llvm-gcc -v -m32 test.c -o test.bc
        Using built-in specs.
        Target: x86_64-linux-gnu
        Configured with: ../llvm-gcc4.2-2.2.source/configure --host=x86_64-linux-gnu
          --build=x86_64-linux-gnu --prefix=/usr/lib/llvm/gcc-4.2
          --enable-languages=c,c++ --program-prefix=llvm- --enable-llvm=/usr/lib/llvm
          --enable-threads --disable-nls --disable-shared --disable-multilib
          --disable-bootstrap
        Thread model: posix
        gcc version 4.2.1 (Based on Apple Inc. build 5546) (LLVM build)
        /usr/lib/llvm/gcc-4.2/libexec/gcc/x86_64-linux-gnu/4.2.1/cc1 -quiet -v
          -imultilib . test.c -quiet -dumpbase test.c -m32 -mtune=generic -auxbase
          test -version -o /tmp/ccw6TZY6.s

    I looked in /usr/lib/llvm/gcc-4.2/libexec/gcc hoping to find another binary, but the only directory there is x86_64-linux-gnu. I will probably look at compiling llvm-gcc from source with appropriate options next.


  • Using git subtree to clone a subdirectory of a project with versioning history then merge it back af

    - by D W
    I am a graduate student with many scripts, bibliography data in bibtex, a thesis draft in latex, presentations in open office, posters in scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it, with versioning history, into a separate project. If I make changes, I'd like to be able to merge them back to the original project.

    Based on my understanding, git subtree can do this: http://github.com/apenwarr/git-subtree

    There is an example along the lines of what I'm trying to do at http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ and this code is from that site:

        git clone git://git2.kernel.org/pub/scm/git/git.git
        newtree=$(git subtree split --prefix=gitweb --annotate='(split) ' \
            0a8f4f0^.. --onto=1130ef3 --rejoin)
        git branch latest_gitweb $newtree
        gitk latest_gitweb

    Say the trunk of my project contained the directories (bib bin cfg data fig src todo). How would I use git-subtree to split off the bib (bibliography) directory with versioning? When I use git subtree split --prefix=bib I get 884842f6f4e9896e2e4e9402ee0ef762cd617257 as output, but I don't know where to go from there.


  • Trouble pre-populating drop down and textarea from MySQL Database

    - by Tony
    I am able to successfully pre-populate my questions using the following code:

        First Name: <input type="text" name="first_name" size="30" maxlength="20" value="' . $row[2] . '" /><br />

    However, when I try to do the same for a drop down box and a textarea box, nothing is pre-populated from the database, even though there is actual content in the database. This is the code I'm using for the drop down and textarea, respectively:

        <?php
        echo '
        <form action="edit_contact.php" method="post">
        <div class="contactfirstcolumn">
        Prefix: <select name="prefix" value="' . $row[0] . '" />
            <option value="blank">--</option>
            <option value="Dr">Dr.</option>
            <option value="Mr">Mr.</option>
            <option value="Mrs">Mrs.</option>
            <option value="Ms">Ms.</option>
        </select><br />';
        ?>

    AND

        Contact Description: <textarea id="contactdesc" name="contactdesc" rows="3" cols="50" value="' . $row[20] . '" /></textarea><br /><br />

    It's important to note that I am not receiving any errors. The form loads fine, however without the data for the drop down and textarea fields. Thanks! Tony


  • ASP.net MVC - Update Model on complex models

    - by ludicco
    Hi there, I'm struggling to get the contents of a form which is a complex model, and then update the model with that complex model. My account model has many individuals.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult OpenAnAccount(string area, [Bind(Exclude = "Id")]Account account,
                [Bind(Prefix = "Account.Individuals")] EntitySet<Individual> individuals)
        {
            var db = new DB();
            account.individuals = individuals;
            db.Accounts.InsertOnSubmit(account);
            db.SubmitChanges();
        }

    So it works nicely for adding new records, but not for updating them, like:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult OpenAnAccount(string area, [Bind(Exclude = "Id")]Account account,
                [Bind(Prefix = "Account.Individuals")] EntitySet<Individual> individuals)
        {
            var db = new DB();
            var record = db.Accounts.Single(a => a.Reference == area);
            account.individuals = individuals;
            try
            {
                UpdateModel(record, account); // I can't convert account ToValueProvider()
                db.SubmitChanges();
            }
            catch
            {
                return ... // Error Message
            }
        }

    My problem is how to use UpdateModel with the account model, since it's not a FormCollection. How can I convert it? How can I use ToValueProvider with a complex model? I hope I was clear enough. Thanks a lot :)


  • WCF 3.5 Service and multiple http bindings

    - by mortenvpdk
    Hi, I can't get my WCF service to work with more than one http binding. In IIS 7 I have two bindings, http://service and http://service.test, both at port 80. In my web.config I have added the baseAddressPrefixFilters, but I can't add more than one:

        <serviceHostingEnvironment>
          <baseAddressPrefixFilters>
            <add prefix="http://service"/>
            <add prefix="http://service.test"/>
          </baseAddressPrefixFilters>
        </serviceHostingEnvironment>

    This gives almost the same error as if no filters were specified at all: "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection. Parameter name: item"

    If I add only one filter, then the service works, but only responds on the added filter address. I've also tried specifying multiple endpoints (and only one filter), like:

        <endpoint address="http://service.test" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />
        <endpoint address="http://service" binding="basicHttpBinding" bindingConfiguration="" contract="IService" />

    Then still only the address also specified in the filter works, and the other returns this error:

        Server Error in Application "ISPSERVICE"
        HTTP Error 400.0 - Bad Request

    Regards, Morten


  • Writing PHP extension - Unable to load dynamic library

    - by Luke
    I'm writing a PHP extension similar to V8JS. The goal, like V8JS, is to embed the V8 engine into PHP so I can execute sandboxed JavaScript code in PHP. (The implementation is different.) The extension compiles fine, but when I attempt to run it I get:

        PHP Warning: PHP Startup: Unable to load dynamic library
        '/phpdev/lib/php/extensions/debug-zts-20090626/v8php.so' -
        dlopen(/phpdev/lib/php/extensions/debug-zts-20090626/v8php.so, 9):
        Symbol not found: __ZN2v88internal8Snapshot13context_size_E
        Referenced from: /phpdev/lib/php/extensions/debug-zts-20090626/v8php.so
        Expected in: flat namespace

    PHP is compiled with the prefix /phpdev (with debug and maintainer flags). V8 is compiled in /v8/ with gyp, with the commands make dependencies and make x64, which produced /v8/out/x64.release and /v8/out/x64.debug. I soft-linked the header files from /v8/include to /phpdev/include, and libv8_base.a from /v8/out/x64.release/libv8_base.a to /phpdev/lib/libv8.a.

    This is my config.m4 file:

        PHP_ARG_ENABLE(v8php, [V8PHP],
        [  --enable-v8php          Include V8 JavaScript Engine])

        if test $PHP_V8PHP != "no"; then
          SEARCH_PATH="$prefix /usr/local /usr"
          SEARCH_FOR="/include/v8.h"
          if test -r $PHP_V8PHP/$SEARCH_FOR; then
            V8_DIR=$PHP_V8PHP
          else
            AC_MSG_CHECKING([for V8 files in default path])
            for i in $SEARCH_PATH ; do
              if test -r $i/$SEARCH_FOR; then
                V8_DIR=$i
                AC_MSG_RESULT(found in $i)
              fi
            done
          fi

          if test -z "$V8_DIR"; then
            AC_MSG_RESULT([not found])
            AC_MSG_ERROR([Unable to locate V8])
          fi

          PHP_ADD_INCLUDE($V8_DIR/include)
          PHP_SUBST(V8PHP_SHARED_LIBADD)
          PHP_ADD_LIBRARY_WITH_PATH(v8, $V8_DIR/$PHP_LIBDIR, V8PHP_SHARED_LIBADD)
          PHP_REQUIRE_CXX()
          PHP_NEW_EXTENSION(v8php, v8php.cc v8_class.cc, $ext_shared)
        fi

    What am I doing wrong?


  • How to make Doxygen ignore specific PHP functions, when generating documentation from a purely proce

    - by Senthil
    I am writing a PHP library and I am trying out Doxygen to generate the API documentation. My library does not use OOP; all code is procedural. I use a lot of helper functions which have an _ (underscore) prefix in their names. They are not part of the publicly exposed API; they are just used internally. Even though they are commented just like the API functions, I don't want them included when giving out the documentation for the API. I want Doxygen to ignore these functions.

    I can think of two solutions for this, but I am not able to implement either one of them.

    First, I could set some configuration in Doxygen to make it ignore specific function name patterns. I went through the Doxygen help documentation and searched the web. There seem to be options to ignore file and folder name patterns, but I am not able to find an option to specify a function name pattern and make it ignore those functions.

    Second, along with all the other content in the comments above functions, I could add some other keyword or something and make Doxygen ignore those functions. I haven't been able to find out how to do that either.

    How can I make Doxygen ignore specific PHP functions when generating documentation?

    Update: I searched within Stack Overflow and came across a question that looked similar to mine. I found out about the EXCLUDE_SYMBOLS config option in one of the answers. You can use that to exclude function names too. More importantly, wildcards are supported, so I am able to ignore all my functions with an _ prefix :) This is ridiculous! I should've done more research :| Someone please delete this question or add this answer as an answer.
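    (For anyone landing here with the same problem, the fix described in the update boils down to one Doxyfile setting; the _* wildcard below assumes the underscore-prefix naming convention used in this library:)

        # Doxyfile: exclude every symbol whose name begins with an underscore
        EXCLUDE_SYMBOLS = _*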


  • Best practices on using URIs as parameter value in REST calls.

    - by dafmetal
    I am designing a REST API where some resources can be filtered through query parameters. In some cases, these filter values would be resources from the same REST API. This makes for longish and pretty unreadable URIs. While this is not too much of a problem in itself, because the URIs are meant to be created and manipulated programmatically, it makes for some painful debugging. I was thinking of allowing shortcuts to URIs used as filter values, and I wonder if this is allowed according to the REST architecture and if there are any best practices.

    For example: I have a resource that gets me Java classes. Then the following request would give me all Java classes:

        GET http://example.org/api/v1/class

    Suppose I want all subclasses of the Collection Java class; then I would use the following request:

        GET http://example.org/api/v1/class?has-supertype=http://example.org/api/v1/class/collection

    That request would return me Vector, ArrayList and all other subclasses of the Collection Java class. That URI is quite long, though. I could already shorten it by allowing hs as an alias for has-supertype. This would give me:

        GET http://example.org/api/v1/class?hs=http://example.org/api/v1/class/collection

    Another way to allow shorter URIs would be to allow aliases for URI prefixes. For example, I could define class as an alias for the URI prefix http://example.org/api/v1/class/. This would give me the following possibility:

        GET http://example.org/api/v1/class?hs=class:collection

    Another possibility would be to remove the class alias entirely and always prefix the parameter value with http://example.org/api/v1/class/, as this is the only thing I would support. This would turn the request for all subtypes of Collection into:

        GET http://example.org/api/v1/class?hs=collection

    Do these "simplifications" of the original request URIs still conform to the principles of a REST architecture? Or did I just go off the deep end?
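    (Whichever variant wins on REST purity, the server-side expansion is cheap to support. A minimal sketch in Java, where the alias-to-prefix registry is assumed to be defined by the API itself:)

        import java.util.HashMap;
        import java.util.Map;

        public class FilterValueResolver {
            // Known alias -> URI-prefix mappings; entirely application-defined.
            private final Map<String, String> prefixes = new HashMap<>();

            public FilterValueResolver() {
                prefixes.put("class", "http://example.org/api/v1/class/");
            }

            /** Expand "class:collection" to the full resource URI; pass full URIs through. */
            public String expand(String filterValue) {
                int colon = filterValue.indexOf(':');
                if (colon > 0) {
                    String alias = filterValue.substring(0, colon);
                    String prefix = prefixes.get(alias);
                    if (prefix != null) {
                        return prefix + filterValue.substring(colon + 1);
                    }
                }
                return filterValue; // already a full URI ("http" is not a known alias)
            }

            public static void main(String[] args) {
                FilterValueResolver r = new FilterValueResolver();
                System.out.println(r.expand("class:collection"));
                // -> http://example.org/api/v1/class/collection
            }
        }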


  • Binary search in a sorted (memory-mapped ?) file in Java

    - by sds
    I am struggling to port a Perl program to Java, and learning Java as I go. A central component of the original program is a Perl module that does string prefix lookups in a 500+ GB sorted text file using binary search (essentially: "seek" to a byte offset in the middle of the file, backtrack to the nearest newline, compare the line prefix with the search string, "seek" to half/double that byte offset, repeat until found...). I have experimented with several database solutions but found that nothing beats this in sheer lookup speed with data sets of this size.

    Do you know of any existing Java library that implements such functionality? Failing that, could you point me to some idiomatic example code that does random access reads in text files?

    Alternatively, I am not familiar with the new (?) Java I/O libraries, but would it be an option to memory-map the 500 GB text file (I'm on a 64-bit machine with memory to spare) and do binary search on the memory-mapped byte array? I would be very interested to hear any experiences you have to share about this and similar problems.
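    (No standard library comes to mind, but the Perl module's algorithm is short enough to port directly. A minimal sketch using RandomAccessFile seeks; it assumes a sorted ASCII file with \n line endings. Note also that a single MappedByteBuffer is capped at 2 GB, so a straight memory-map of a 500 GB file would need one mapped window per region anyway:)

        import java.io.IOException;
        import java.io.RandomAccessFile;

        public class SortedFileSearch {
            /** Returns the first line starting with prefix, or null if none exists. */
            public static String find(RandomAccessFile f, String prefix) throws IOException {
                long lo = 0, hi = f.length();
                while (lo < hi) {
                    long mid = (lo + hi) / 2;
                    long lineStart = startOfLine(f, mid); // backtrack to nearest newline
                    f.seek(lineStart);
                    String line = f.readLine();
                    if (line == null) break;              // only possible at end of file
                    if (line.compareTo(prefix) < 0) {
                        lo = f.getFilePointer();          // target is after this line
                    } else {
                        hi = lineStart;                   // this line or an earlier one
                    }
                }
                f.seek(startOfLine(f, lo));
                String line = f.readLine();
                return (line != null && line.startsWith(prefix)) ? line : null;
            }

            // Walk backwards from pos to the first byte after the previous '\n'.
            private static long startOfLine(RandomAccessFile f, long pos) throws IOException {
                while (pos > 0) {
                    f.seek(pos - 1);
                    if (f.read() == '\n') break;
                    pos--;
                }
                return pos;
            }
        }

    (Like the Perl version, this costs O(log n) seeks per lookup; comparisons are byte-wise, which is fine for ASCII keys.)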


  • I need my BizTalk map to stop converting xml:lang to ns1:lang

    - by Jeremy Stein
    I have a map in BizTalk 2009 that is converting some data into an XML document to be sent on to another system. The target schema includes some elements with xml:lang attributes. BizTalk generates those as ns1:lang. The target system requires that the prefix xml be used. Here is a simplified example to show what BizTalk is doing:

    sample.xsd

        <xs:schema targetNamespace="http://example.com/" xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:import schemaLocation="common.xsd" namespace="http://www.w3.org/XML/1998/namespace" />
          <xs:element name="example">
            <xs:complexType>
              <xs:attribute ref="xml:lang" />
            </xs:complexType>
          </xs:element>
        </xs:schema>

    common.xsd

        <?xml version="1.0" encoding="utf-16"?>
        <xs:schema xmlns:xml="http://www.w3.org/XML/1998/namespace"
                   targetNamespace="http://www.w3.org/XML/1998/namespace"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:attribute name="lang" type="xs:language" />
        </xs:schema>

    Example of map output:

        <ns0:example xmlns:ns0="http://example.com/"
                     xmlns:ns1="http://www.w3.org/XML/1998/namespace"
                     ns1:lang="en-US" />

    Is there some way to convince BizTalk to use the xml prefix?


  • Custom listbox sorting

    - by Arcadian
    I need to sort the data contained within a number of listboxes. The user will be able to select between two different types of sorting using radio boxes, one of which is checked by default on form load. I have created the IF statements needed in order to test whether the checked condition is true for that radio button, but I need some help to create the custom sort algorithms.

    Each list will contain similar looking data, the only difference being the prefix with which each line starts. For example, each line in the first listbox starts with the prefix "G30", the second listbox with "G31", and so on. There are 10 listboxes in total (G30-G39 in terms of prefixes).

    The first sort algorithm has to sort the lines by the number order of the first 13 chars. Example: this is how the data looks before sorting:

        G35:45:58:11 JG07
        G35:45:20:41 JG01
        G35:58:20:21 JG03
        G35:66:22:20 JG05
        G35:45:85:21 JG02
        G35:64:56:11 JG03
        G35:76:35:11 JG02
        G35:77:97:12 JG03
        G35:54:29:11 JG01
        G35:55:51:20 JG01
        G35:76:24:20 JG06
        G35:76:55:11 JG01

    and this is how it should look after sorting:

        G35:45:20:41 JG01
        G35:45:58:11 JG07
        G35:45:85:21 JG02
        G35:54:29:11 JG01
        G35:55:51:20 JG01
        G35:58:20:21 JG03
        G35:64:56:11 JG03
        G35:66:22:20 JG05
        G35:76:24:20 JG06
        G35:76:35:11 JG02
        G35:76:55:11 JG01
        G35:77:97:12 JG03

    As you can see, the prefixes are the same, so it is sorted, lowest first, by the next pair of integers, then the next pair, and the next, but not by the value after "JG". The second sort algorithm will ignore the first 13 chars and sort by order of the value after "JG", highest first.

    Any help? There's some rep in it for you :) Thanks in advance.
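    (The question is about a WinForms UI, but the two comparison rules themselves are the crux. A sketch of both sort keys in Java for illustration, assuming lines are formatted exactly as above: a 13-character prefix of fixed-width two-digit fields, then "JG" and a two-digit number. Because the numeric fields are fixed-width, plain string comparison of the first 13 characters already gives numeric order:)

        import java.util.Arrays;
        import java.util.Comparator;

        public class LineSorts {
            // Sort 1: order by the first 13 characters ("G35:45:58:11 "), lowest first.
            static final Comparator<String> BY_NUMBER_PAIRS =
                    Comparator.comparing(line -> line.substring(0, 13));

            // Sort 2: ignore the first 13 characters; order by the integer after
            // "JG", highest first.
            static final Comparator<String> BY_JG_VALUE_DESC =
                    Comparator.comparingInt((String line) ->
                            Integer.parseInt(line.substring(line.indexOf("JG") + 2)))
                        .reversed();

            public static void main(String[] args) {
                String[] lines = { "G35:45:58:11 JG07", "G35:45:20:41 JG01", "G35:58:20:21 JG03" };
                Arrays.sort(lines, BY_NUMBER_PAIRS);
                System.out.println(Arrays.toString(lines)); // JG01, JG07, JG03 rows
                Arrays.sort(lines, BY_JG_VALUE_DESC);
                System.out.println(Arrays.toString(lines)); // JG07, JG03, JG01 rows
            }
        }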


  • Apache mod_rewrite RewriteCond to by-pass static resources not working

    - by d11wtq
    I can't for the life of me fathom out why this RewriteCond is causing every request to be sent to my FastCGI application when it should in fact be letting Apache serve up the static resources. I've added a hello.txt file to my DocumentRoot to demonstrate.

    The text file:

        $ ls /Sites/CioccolataTest.webapp/Contents/Resources/static
        hello.txt

    The VirtualHost and its rewrite rules:

        AppClass /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest -port 5065
        FastCgiExternalServer /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi -host 127.0.0.1:5065

        <VirtualHost *:80>
          ServerName cioccolata-test.webdev
          DocumentRoot /Sites/CioccolataTest.webapp/Contents/Resources/static
          RewriteEngine On
          RewriteCond %{REQUEST_FILENAME} !-f
          RewriteRule ^/(.*)$ /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest.fcgi/$1 [QSA,L]
        </VirtualHost>

    Even with the -f, Apache is directing requests for this text file to the app (i.e. accessing http://cioccolata-test.webdev/hello.txt returns my app, not the text file). As a proof of concept I changed the RewriteCond to:

        RewriteCond %{REQUEST_URI} !^/hello.txt

    That made it serve the text file correctly and allowed every other request to hit the FastCGI application. Why doesn't my original version work? I need to tell Apache to serve every file in the DocumentRoot as a static resource, but if the file doesn't exist it should rewrite the request to my FastCGI application.

    NOTE: The running FastCGI application is at /Sites/CioccolataTest.webapp/Contents/MacOS/CioccolataTest (without the .fcgi suffix)... the .fcgi suffix is only being used to tell the fastcgi module to direct the request to the app.


  • Spring MVC: How to resolve the path to subdirectories of the root 'JSP' folder in a web application

    - by chrisjleu
    What is a simple way to resolve the path to a JSP file that is not located in the root JSP directory of a web application, using Spring MVC's view resolvers? For example, suppose we have the following web application structure:

        web-app
        |-WEB-INF
          |-jsp
            |-secure
              |-admin.jsp
              |-admin2.jsp
            index.jsp
            login.jsp

    I would like to use some out-of-the-box components to resolve the JSP files within the jsp root folder and the secure subdirectory. I have a *-servlet.xml file that defines:

    an out-of-the-box InternalResourceViewResolver:

        <bean id="jspViewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver">
          <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property>
          <property name="prefix" value="/WEB-INF/jsp/"></property>
          <property name="suffix" value=".jsp"></property>
        </bean>

    a handler mapping:

        <bean id="handlerMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
          <property name="mappings">
            <props>
              <prop key="/index.htm">urlFilenameViewController</prop>
              <prop key="/login.htm">urlFilenameViewController</prop>
              <prop key="/secure/**">urlFilenameViewController</prop>
            </props>
          </property>
        </bean>

    and an out-of-the-box UrlFilenameViewController controller:

        <bean id="urlFilenameViewController" class="org.springframework.web.servlet.mvc.UrlFilenameViewController">
        </bean>

    The problem I have is that requests to the JSPs in the secure directory cannot be resolved, as the jspViewResolver only has a prefix defined as /WEB-INF/jsp/ and not /WEB-INF/jsp/secure/. Is there a way to handle subdirectories like this? I would prefer to keep this structure because I'm also trying to make use of Spring Security, and having all secure pages in a subdirectory is a nice way to do this. There's probably a simple way to achieve this but I'm new to Spring and the Spring MVC framework, so any pointers would be appreciated.
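    (One approach, assuming the stock UrlFilenameViewController is dropping the directory part of the path in your Spring version: subclass it so the view name keeps the subdirectory, letting the existing jspViewResolver prefix/suffix map /secure/admin.htm to /WEB-INF/jsp/secure/admin.jsp. A sketch; the overridden method name matches the Spring 2.5 API, so adjust for your version:)

        import org.springframework.web.servlet.mvc.UrlFilenameViewController;

        // Keeps the directory part of the request path in the view name, so
        // /secure/admin.htm becomes the view "secure/admin" and the existing
        // jspViewResolver turns that into /WEB-INF/jsp/secure/admin.jsp.
        public class PathPreservingViewController extends UrlFilenameViewController {

            @Override
            protected String extractViewNameFromUrlPath(String uri) {
                int start = uri.startsWith("/") ? 1 : 0;   // drop the leading slash
                int end = uri.lastIndexOf('.');            // drop the ".htm" extension
                if (end < start) {
                    end = uri.length();                    // no extension present
                }
                return uri.substring(start, end);          // e.g. "secure/admin"
            }
        }

    (Then point the urlFilenameViewController bean definition at this class instead of the stock one.)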

