Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • Are fopen/fread/fgets PID-safe in C?

    - by Jane
    Various users are browsing through a website 100% programmed in C (CGI). Each webpage uses fopen/fgets/fread to read common data (like navigation bars) from files. Would the calls to fopen/fgets/fread interfere with each other if various people are browsing the same page? If so, how can this be solved in C? (This is a Linux server, compiling is done with gcc and this is for a CGI website programmed in C.) Example: FILE *DATAFILE = fopen(PATH, "r"); if ( DATAFILE != NULL ) { while ( fgets( LINE, BUFFER, DATAFILE ) ) { /* do something */ } }
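    A minimal sketch (not from the original post) of why concurrent readers do not interfere: each CGI process that calls fopen gets its own FILE* and its own file offset, so simultaneous read-only access needs no coordination; the shared flock shown here is optional and only becomes worthwhile once some process also rewrites the file.

        #include <stdio.h>
        #include <sys/file.h>   /* flock */

        /* Each CGI process runs this independently; streams and offsets are
         * per-process, so readers never disturb one another. */
        static void emit_navbar(const char *path)
        {
            char line[1024];
            FILE *f = fopen(path, "r");
            if (f == NULL)
                return;
            flock(fileno(f), LOCK_SH);      /* optional shared lock: many readers allowed */
            while (fgets(line, sizeof line, f))
                fputs(line, stdout);        /* write the fragment into the CGI output */
            flock(fileno(f), LOCK_UN);
            fclose(f);
        }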

    Read the article

  • Configuration Log4j: ConsoleAppender to System.err?

    - by Gerard
    I saw that one of our tools uses a ConsoleAppender to System.err next to System.out in its log4j configuration. Fragments of the configuration: <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender"> <!-- Log to STDOUT. --> <param name="Target" value="System.out"/> .... <appender name="CONSOLE_ERR" class="org.apache.log4j.ConsoleAppender"> <!-- Log to STDERR. --> <param name="Target" value="System.err"/> In Eclipse this results in double messages to the console, so I believe that is of no use, right? On the Linux server I see only one message to the PuTTY console, so where does that System.err message go?

    Read the article

  • How are vector patterns used in syntax-rules?

    - by Jay
    Hi, I have been writing Common Lisp macros, so Scheme's R5Rs macros are a bit unnatural to me. I think I got the idea, except that I don't understand how one would use vector patterns in syntax-rules: (define-syntax mac (syntax-rules () ((mac #(a b c d)) (let () (display a) (newline) (display d) (newline))))) (expand '(mac #(1 2 3 4))) ;; Chicken's expand-full extension shows macroexpansion => (let746 () (display747 1) (newline748) (display747 4) (newline748)) I don't see how I'd use a macro that requires its arguments to be written as a vector: (mac #(1 2 3 4)) => 1 4 Is there some kind of technique that uses those patterns? Thank you!

    Read the article

  • How to use Application.OnTime to call a macro at a set time everyday, without having to close workbook

    - by Shayne K
    I have written a macro that uses Application.OnTime, and it works if I execute it manually. I'm trying to automate the process without relying on putting Application.OnTime in the ThisWorkbook module's Private Sub Workbook_Open(). Most people do it that way because Windows Scheduler can open the workbook at a certain time, which starts the macro on open. I CANNOT USE SCHEDULER. Because I am not able to use Windows Scheduler, I will keep the workbook open, and the timer should refresh my data and then call my macro at a certain time every day. Where do I place this code, and how do I set up an automatic timer?
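    A minimal sketch of the usual self-rescheduling pattern (procedure name and times are placeholders, not from the original post): the workbook stays open, and each run re-arms Application.OnTime for the next day.

        ' In a standard module.  Arm the timer once (manually, or from whatever
        ' entry point you prefer); every run then schedules the next one.
        Public Sub RefreshAndRun()
            ' ... refresh the data and call the real macro here ...
            Application.OnTime Date + 1 + TimeValue("17:00:00"), "RefreshAndRun"
        End Sub

        ' One-off call to start the cycle at a fixed time:
        Public Sub StartDailyTimer()
            Application.OnTime TimeValue("17:00:00"), "RefreshAndRun"
        End Sub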

    Read the article

  • Multithreading in Lua

    - by RCIX
    I was having a discussion with my friend the other day. I was saying that, in pure Lua, you couldn't build a preemptive multitasking system. He claims you can, because of the following reason: Both C and Lua have no inbuilt threading libraries [OP's note: well, Lua technically does, but AFAIK it's not useful for our purposes]. Windows, which is written mostly in C(++), has pre-emptive multitasking, which they built from scratch. Therefore, you should be able to do the same in Lua. The big problem I see with that is that the main way preemptive multitasking works (to my knowledge) is that it generates regular interrupts which the manager uses to get control and determine what code it should be working on next. I also don't think Lua has any facility that can do that. My question is: is it possible to write a pure-Lua library that allows people to have pre-emptive multitasking?

    Read the article

  • Can an app use the clipboard for its own purposes? (read: who owns the clipboard?)

    - by eran
    In PowerBuilder's IDE, the code autocomplete feature uses the clipboard to communicate the completed text to the code window. By doing so, it overwrites whatever was stored on the clipboard before. So, if you had the winning numbers of the next lottery stored on your clipboard, and you used the autocomplete to turn m_goodfor into m_goodfornothing, you've just lost your only chance of ever getting rich, and you're left with nothing on your clipboard. Features like that are the reason I hate software. It looks like it was implemented by some intern that no one was looking after. However, there's also a chance I got all worked up for nothing, and making such use of the clipboard is absolutely legit. So, can an app use the clipboard for its own purposes? Who is considered the owner of the clipboard? (Bonus votes to whoever puts himself in place of the feature's programmer, and provides some reasoning for this being done on purpose, assuming the users would actually benefit from it)

    Read the article

  • GUI-Library for microcontroller

    - by Martin Kirsche
    I want to create a GUI-driven application for a microcontroller (Atmel XMEGA) that is connected to a 128x64-dot graphics LCD (EA DOGL128-6) and 4 buttons for navigation. Controlling the display itself (e.g. drawing pixels and characters) is no problem, but in order to avoid reinventing the wheel I was googling for a GUI library/toolkit that is written in C, includes its source code, will run on a 32 MHz 8-bit microcontroller and provides at least the following controls: panel (to group elements), menu (scrollable), icon, label, button, line-graph (optional). But I didn't find anything useful. Does anyone know of (or, better, uses) such a library (preferably free)?

    Read the article

  • Forms bound to updateable ADO recordsets are not updateable when the source includes a JOIN

    - by Art
    I'm developing an application in Access 2007. It uses an .accdb front end connecting to an SQL Server 2005 backend. I use forms that are bound to ADO recordsets at runtime. For the sake of efficiency, the recordsets usually contain only one record, and are queried out on the server: Public Sub SetUpFormRecordset(cn As ADODB.Connection, rstIn As ADODB.Recordset, rstSource As String) Dim cmd As ADODB.Command Dim I As Long Set cmd = New ADODB.Command cn.Errors.Clear ' Recordsets based on command object Execute method are Read Only! With cmd Set .ActiveConnection = cn .CommandType = adCmdText .CommandText = rstSource End With With rstIn .CursorType = adOpenKeyset .LockType = adLockPessimistic 'Check the locktype after opening; optimistic locking is worthless on a bound End With ' form, and ADO might open optimistically without firing an error! rstIn.Open cmd, , adOpenKeyset, adLockPessimistic 'This should run the query on the server and return an updatable recordset With cn If .Errors.Count <> 0 Then For Each errADO In .Errors Call HandleADOErrors(.Errors(I)) I = I + 1 Next errADO End If End With End Sub rstSource (the string containg the TSQL on which the recordset is based) is assembled by the calling routine, in this case from the Open event of the form being bound: Private Sub Form_Open(Cancel As Integer) Dim rst As ADODB.Recordset Dim strSource As String, DefaultSource as String Dim lngID As Long lngID = Forms!MyParent.CurrentID strSource = "SELECT TOP (100) PERCENT dbo.Customers.CustomerID, dbo.Customers.LegacyID, dbo.Customers.Active, dbo.Customers.TypeID, dbo.Customers.Category, " & _ "dbo.Customers.Source, dbo.Customers.CustomerName, dbo.Customers.CustAddrID, dbo.Customers.Email, dbo.Customers.TaxExempt, dbo.Customers.SalesTaxCode, " & _ "dbo.Customers.SalesTax2Code, dbo.Customers.CreditLimit, dbo.Customers.CreationDate, dbo.Customers.FirstOrder, dbo.Customers.LastOrder, " & _ "dbo.Customers.nOrders, dbo.Customers.Concurrency, dbo.Customers.LegacyLN, dbo.Addresses.AddrType, dbo.Addresses.AddrLine1, dbo.Addresses.AddrLine2, " & _ "dbo.Addresses.City, dbo.Addresses.State, dbo.Addresses.Country, dbo.Addresses.PostalCode, dbo.Addresses.PhoneLandline, dbo.Addresses.Concurrency " & _ "FROM dbo.Customers INNER JOIN " & _ "dbo.Addresses ON dbo.Customers.CustAddrID = dbo.Addresses.AddrID " strSource = strSource & "WHERE dbo.Customers.CustomerID= " & lngID With Me 'Default is Set up for editing one record If Not Nz(.RecordSource, vbNullString) = vbNullString Then If .Dirty Then .Dirty = False 'Save any changes on the form .RecordSource = vbNullString End If If rst Is Nothing Then 'Might not be first time through DefaultSource = .RecordSource Else rst.Close Set rst = Nothing End If End With Set rst = New ADODB.Recordset Call setupformrecordset(dbconn, rst, strSource) 'dbconn is a global variable With Me Set .Recordset = rst End With End Sub The recordset that is returned from setupformrecordset is fully updateable, and its .Supports property shows this. It can be edited and updated in code. The entire form, however, is read only, even though it's .AllowEdits and .AllowAdditions properties are both true. Even the fields from the right hand side (the 'many' side) cannot be edited. Removing the INNER JOIN clause from the TSQL (restricting strSource to one table) makes the form fully editable. I've verified that the TSQL includes priimary key fields from both tables, and each table includes a timestamp field for concurrency. 
I tried changing the .CursorType and .CursorLocation properties of the recordset to no avail. What am I doing wrong?

    Read the article

  • How can I find out which version of emacs introduced a function?

    - by Chris R
    I want to write a .emacs that uses as much of the mainline emacs functionality as possible, falling back gracefully when run under previous versions. I've found through trial and error some functions that didn't exist, for example, in emacs 22 but now do in emacs 23 on the rare occasion that I've ended up running my dotfiles under emacs 22. However, I'd like to take a more proactive approach to this, and have subsets of my dotfiles that only take effect when version = <some-threshold> (for example). The function I'm focusing on right now is scroll-bar-mode but I'd like a general solution. I have not seen a consistent source for this info; I've checked the gnu.org online docs, the function code itself, and so far nothing. How can I determine this, without keeping every version of emacs I want to support kicking around?
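    A small sketch of the feature-test approach (not from the original post): rather than tracking which release introduced each function, guard each setting with fboundp, or compare emacs-major-version when a whole block is version-specific.

        ;; Guard a single function that may not exist in older Emacsen:
        (when (fboundp 'scroll-bar-mode)
          (scroll-bar-mode -1))

        ;; Guard a whole block behind a version check:
        (when (>= emacs-major-version 23)
          ;; settings that rely on Emacs 23 behaviour go here
          )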

    Read the article

  • Powerbuilder : How to compare the old value and new value of a column in data window

    - by Archangel
    Suppose I have a datawindow object which is attached to a datawindow control named 'dw_detail'. This object uses the grid presentation style and has a database column named 'found'. Now when a user modifies that column's value, I want to compare it with the original value that was fetched from the database. I know I can access the value of that column as 'dw_detail.object.found[row_no]'. Now I am trying to access the original value of the column as 'dw_detail.object.found.original[row_no]', but it is not working. It is not giving any compile error, but when I debugged, 'dw_detail.object.found.original[row_no]' contains no values. How can I access the original value of that column?
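    As an illustration only (an untested assumption about the exact call, not from the original post): PowerScript's GetItemString has a form that takes a buffer and an originalvalue flag, which is meant to return the value as it was fetched from the database rather than the edited value.

        string ls_current, ls_original
        ls_current  = dw_detail.GetItemString(row_no, "found")                  // value as currently edited
        ls_original = dw_detail.GetItemString(row_no, "found", Primary!, TRUE)  // value as originally retrieved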

    Read the article

  • Calling C function from DTrace scripts

    - by dmeister
    DTrace is an impressive, powerful tracing system originally from Solaris that has been ported to FreeBSD and Mac OS X. DTrace uses a high-level language called D, not unlike AWK or C. Here is an example: io:::start /pid == $1/ { printf("file %s offset %d size %d block %llu\n", args[2]->fi_pathname, args[2]->fi_offset, args[0]->b_bcount, args[0]->b_blkno); } Using the command line sudo dtrace -q -s <name>.d <pid>, all IOs originating from that process are logged. My question is whether and how it is possible to call custom C functions from a DTrace script to do advanced operations with that tracing data during the tracing itself.

    Read the article

  • How to escape simple SQL queries in C# for SqlServer

    - by sri
    I use an API that expects a SQL string. I take a user input, escape it and pass it along to the API. The user input is quite simple. It asks for column values. Like so: string name = userInput.Value; Then I construct a SQL query: string sql = string.Format("SELECT * FROM SOME_TABLE WHERE Name = '{0}'", name.Replace("'", "''")); Is this safe enough? If it isn't, is there a simple library function that makes column values safe: string sql = string.Format("SELECT * FROM SOME_TABLE WHERE Name = '{0}'", SqlSafeColumnValue(name)); The API uses SQL Server as the database. Thanks.
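    For contrast, a minimal sketch of the parameterised-query alternative (assuming you can reach a SqlConnection rather than only the string-taking API; connection and name are placeholders): parameters sidestep the quoting problem entirely because the value is never spliced into the SQL text.

        using System.Data.SqlClient;

        // 'connection' is an open SqlConnection, 'name' is the user input.
        using (var cmd = new SqlCommand("SELECT * FROM SOME_TABLE WHERE Name = @name", connection))
        {
            cmd.Parameters.AddWithValue("@name", name);   // value travels separately from the SQL text
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // ... consume the row ...
                }
            }
        }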

    Read the article

  • Returning a JSON view in combination with a boolean

    - by Rody van Sambeek
    What I would like to accomplish is that a partial view contains a form. This form is posted using jQuery $.post. After a successful post, JavaScript picks up the result and uses jQuery's html() method to fill a container with the result. However, now I don't want to return the partial view, but a JSON object containing that partial view and some other value (Success - a bool in this case). I tried it with the following code: [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(int id, Item item) { if (ModelState.IsValid) { try { // ... return Json(new { Success = true, PartialView = PartialView("Edit", item) }); } catch(Exception ex) { // ... } } return Json(new { Success = false, PartialView = PartialView("Edit", item) }); } However, I don't get the HTML in this JSON object and can't use html() to show the result. I tried using this method to render the partial as HTML and send that, but it fails on the RenderControl(tw) call with: The method or operation is not implemented.
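    A sketch of the usual workaround (assuming ASP.NET MVC 2 or later; the helper name is made up): render the partial view to a string yourself, then put that string in the JSON payload instead of the PartialViewResult object.

        // Inside the same controller (System.Web.Mvc).
        private string RenderPartialViewToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var sw = new System.IO.StringWriter())
            {
                var viewResult = ViewEngines.Engines.FindPartialView(ControllerContext, viewName);
                var viewContext = new ViewContext(ControllerContext, viewResult.View, ViewData, TempData, sw);
                viewResult.View.Render(viewContext, sw);
                return sw.ToString();
            }
        }

        // Usage inside the action:
        // return Json(new { Success = true, PartialView = RenderPartialViewToString("Edit", item) });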

    Read the article

  • Django save, column specified twice

    - by realshadow
    Hey, I would like to save a modified model, but I get a programming error saying that a column is specified twice. class ProductInfo(models.Model): product_info_id = models.AutoField(primary_key=True) language_id = models.IntegerField() product_id = models.IntegerField() description = models.TextField(blank=True) status = models.IntegerField() model = models.CharField(max_length=255, blank=True) label = models.ForeignKey(Category_info, verbose_name = 'Category_info', db_column = 'language_id', to_field = 'node_id', null = True) I get this error because the foreign key uses language_id as its db_column. If I delete it, my object is saved properly. I don't quite understand what's going on, and since I have defined almost all of my models this way, I fear it's totally wrong or maybe I just misunderstood it... Any ideas? Regards
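    A sketch of the likely cause (an assumption, not from the original post): both language_id = models.IntegerField() and the ForeignKey map to the same language_id column, so the generated INSERT names that column twice; keeping only the ForeignKey avoids the clash.

        from django.db import models

        class ProductInfo(models.Model):
            product_info_id = models.AutoField(primary_key=True)
            product_id = models.IntegerField()
            description = models.TextField(blank=True)
            status = models.IntegerField()
            model = models.CharField(max_length=255, blank=True)
            # Category_info is the related model from the original post.
            # The ForeignKey alone owns the language_id column; the raw id is
            # still readable through the label_id attribute.
            label = models.ForeignKey(Category_info, verbose_name='Category_info',
                                      db_column='language_id', to_field='node_id',
                                      null=True)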

    Read the article

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of innodb data and log files. Up to MEB 3.6.1, the user backs up only the innodb tables in a 3-step process: STEP 1. Take backup using the --only-innodb option. STEP 2. Temporarily make the tables read only by executing “FLUSH TABLES WITH READ LOCK”. STEP 3. Manually copy the .frm files of innodb tables to the destination directory where the backup is stored. MEB 3.7.0 has an enhancement to innodb file copying: the .frm files get copied along with the hot backup done for innodb files. I would like to make the blog a little interactive by explaining the feature as answers: 1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup. 2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to do a copy of .frm files without locking the tables during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation. 3. How is data consistency ensured? MEB does validation of the .frm files after copying by comparing with the server directory to see if the timestamps of any of the .frm files are greater than the saved system time (check .frm time). This change in timestamp of the .frm files will show if a table is altered during the process of backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to the server directory, it shows that table/tables have been dropped during the process of backup. If the number of .frm files is more compared to the server directory, it shows that new table/tables have been created during the backup operation. 4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations, does the validation and throws a WARNING if there is any inconsistency found in .frm files at the end of the backup operation. This means the user is warned of some DDL operations that occurred during the backup operation, and has to manually copy the .frm files or do a backup again. 5. What is the option and explain its usage? The option introduced is --only-innodb-with-frm, which does an optimistic copy of .frm files without locking. This can be used when the user wants to back up only innodb tables along with .frm files. The option can take one of the 2 values: all | related. --only-innodb-with-frm=all does a copy of all .frm files of all innodb tables. --only-innodb-with-frm=related works in conjunction with the --include option. This is to allow partial backup of .frm files corresponding to the tables specified in --include. Let me show the usage with example output: ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully. At the end of a successful 'backup' run mysqlbackup prints "mysqlbackup completed OK!".
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
If a DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users that have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html

    Read the article

  • Problem with HSQLDB & SequenceGenerator

    - by Srirangan
    Hi, I have an entity which has an ID field: @Id @Column(name = "`U##ID_VOIE`") @GeneratedValue(generator = "VOIE_SEQ") private String id; The class has the sequence generator defined as well: @SequenceGenerator(name = "VOIE_SEQ", sequenceName = "VOIE_SEQ") and the Oracle schema has the requisite sequence present. Everything works okay. We also have tests, which use an in-memory HSQLDB. Before running the tests, all the tables are created based on the Hibernate entity classes. However, the table for this particular class is not being created. An error pops up, because ID is a String and the SequenceGenerator in HSQLDB returns an INT / LONG / Numeric value. The application is using a legacy Oracle database and the ID_VOIE column must remain a String / Varchar. Any solutions?

    Read the article

  • Memory in Eclipse

    - by user247866
    I'm getting the java.lang.OutOfMemoryError exception in Eclipse. I know that Eclipse by default uses a heap size of 256M. I'm trying to increase it but nothing happens. For example: eclipse -vmargs -Xmx16g -XX:PermSize=2g -XX:MaxPermSize=2g I also tried different settings, using only the -Xmx option, using different cases of g, G, m, M, different memory sizes, but nothing helps. It does not matter which params I specify; the heap exception is thrown at the same point, so I assume I'm doing something wrong that makes Eclipse ignore the -Xmx parameter. I'm using a 32GB RAM machine and trying to execute something very simple such as: double[][] a = new double[15000][15000]; It only works when I reduce the array size to something around 10000 by 10000. I'm working on Linux, and using the top command I can see how much memory the Java process is consuming; it's less than 2%. Thanks!
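    For scale (a back-of-the-envelope check, not part of the original post): a 15000 × 15000 array of doubles needs roughly 15000 × 15000 × 8 bytes ≈ 1.7 GiB of heap, far beyond a 256 MB limit. Note also that eclipse -vmargs only sizes the heap of the IDE's own JVM; a program launched from Eclipse takes its -Xmx from the VM arguments of its Run Configuration, which would explain why the flag on the eclipse command line appears to be ignored.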

    Read the article

  • How to configure tomahawk and trinidad to work together

    - by ashtaganesh
    Hi, I am new to JSF. I want to use the inputListOfValues component from Trinidad in my application, which also uses Tomahawk. I have added the required jars for Trinidad, and before getting to inputListOfValues I tried rendering one simple inputText on the browser using Trinidad. I was not getting any configuration errors, but it was not printing the corresponding text on the browser. So I wonder if I can use Tomahawk and Trinidad together? If yes, is there any configuration setting we need to do for this? Any help will be greatly appreciated. Thanks, Ganesh.

    Read the article

  • Facebook connect integration (registration & login)

    - by nikospkrk
    Hi, I'm really struggling to make Facebook Connect work for my system. What I want to do, when the user is not yet registered on Facebook: fetch some user profile fields into my database (ideally via my registration page, which already works for non-Facebook users), log the user into my website, and redirect the user to my homepage. What I've done so far: set up the application in FB, add the Facebook class from the GitHub website and integrate some code to make it work, and add additional parameters to the login/register Facebook link. I'm struggling to redirect the user, after authorizing, to my register page (register/?facebook). The "Post-Authorize Redirect URL" field doesn't seem to work properly; maybe I am not filling in the right field? My other question is: if my registration page uses a redirection (index.php redirects to register.php), would the information given by Facebook through the $_POST method still be available in the register.php page? I don't think so, do you? Regards, Nicolas.

    Read the article

  • How to retrieve JSON Data Array from ExtJS Store

    - by user291928
    Is there a method allowing me to return my stored data in an ExtJS Grid Panel exactly the way I loaded it using: var data = ["value1", "value2"] Store.loadData(data); I would like to have a user option to reload the Grid, but changes to the store need to be taken into account. The user can make changes and the grid is dynamically updated, but if I reload the grid, the data that was originally loaded is shown even though the database has been updated with the new changes. I would prefer not to reload the page and just let them reload the grid data itself with the newly changed store. I guess I'm looking for something like this: var data = Store.getData(); // data = ["value1", "value2"] after it's all said and done. Or is there a different way to refresh the grid with new data that I am not aware of? Even using the proxy still uses the "original" data, not the new store. Thanks in advance
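    A small sketch (assuming an Ext JS 3-style store; not from the original post) of pulling the current record values back out of the store instead of reusing the array it was first loaded with:

        var data = [];
        store.each(function (record) {
            data.push(record.data);      // record.data holds the current field values
        });
        // 'data' can now be handed back to loadData(), sent to the server, etc.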

    Read the article

  • Windows API GUI programming

    - by genesys
    Hi! I'm working on a project with an outdated, very old-looking GUI (the GUI framework used is more than 10 years old). Since the programming language used is Eiffel, there are almost no good libraries for GUI development. Although wrappers for C libraries exist, it's not that easy to wrap something like Qt with them. The current GUI framework uses the Windows API to create windows, widgets and so on. But as stated - it's very old. Now I would like to learn more about how to use the Windows API directly to create state-of-the-art GUIs. Can someone recommend any reading material? Thanks!

    Read the article

  • How to load the model ????

    - by rajesh
    How can I load a model? I have tried several times but do not get the answer. My code is: <?php class NotesController extends AppController { var $name='Notes'; var $helpers = array('Html','Form','Ajax','Javascript'); var $uses = array('note'); var $components = array('ModelLoader'); function index(){ $this->ModelLoader->setController($this); $result = $this->params['url']['obj']; //print_r($result); $ee=$this->ModelLoader->load('note'); $pass = $this->note->search($result);

    Read the article

  • Developing Schema Compare for Oracle (Part 2): Dependencies

    - by Simon Cooper
    In developing Schema Compare for Oracle, one of the issues we came across was the size of the databases. As detailed in my last blog post, we had to allow schema pre-filtering due to the number of objects in a standard Oracle database. Unfortunately, this leads to some quite tricky situations regarding object dependencies. This post explains how we deal with these dependencies. 1. Cross-schema dependencies Say, in the following database, you're populating SchemaA, and synchronizing SchemaA.Table1: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(Col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1(Col1)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); We need to do a rebuild of SchemaA.Table1 to change Col1 from a VARCHAR2(100) to a NUMBER. This consists of: Creating a table with the new schema Inserting data from the old table to the new table, with appropriate conversion functions (in this case, TO_NUMBER) Dropping the old table Rename new table to same name as old table Unfortunately, in this situation, the rebuild will fail at step 1, as we're trying to create a NUMBER column with a foreign key reference to a VARCHAR2(100) column. As we're only populating SchemaA, the naive implementation of the object population prefiltering (sticking a WHERE owner = 'SCHEMAA' on all the data dictionary queries) will generate an incorrect sync script. What we actually have to do is: Drop foreign key constraint on SchemaA.Table1 Rebuild SchemaB.Table1 Rebuild SchemaA.Table1, adding the foreign key constraint to the new table This means that in order to generate a correct synchronization script for SchemaA.Table1 we have to know what SchemaB.Table1 is, and that it also needs to be rebuilt to successfully rebuild SchemaA.Table1. SchemaB isn't the schema that the user wants to synchronize, but we still have to load the table and column information for SchemaB.Table1 the same way as any table in SchemaA. Fortunately, Oracle provides (mostly) complete dependency information in the dictionary views. Before we actually read the information on all the tables and columns in the database, we can get dependency information on all the objects that are either pointed at by objects in the schemas we’re populating, or point to objects in the schemas we’re populating (think about what would happen if SchemaB was being explicitly populated instead), with a suitable query on all_constraints (for foreign key relationships) and all_dependencies (for most other types of dependencies eg a function using another function). The extra objects found can then be included in the actual object population, and the sync wizard then has enough information to figure out the right thing to do when we get to actually synchronize the objects. Unfortunately, this isn’t enough. 2. Dependency chains The solution above will only get the immediate dependencies of objects in populated schemas. What if there’s a chain of dependencies? A.tbl1 -> B.tbl1 -> C.tbl1 -> D.tbl1 If we’re only populating SchemaA, the implementation above will only include B.tbl1 in the dependent objects list, whereas we might need to know about C.tbl1 and D.tbl1 as well, in order to ensure a modification on A.tbl1 can succeed. What we actually need is a graph traversal on the dependency graph that all_dependencies represents. 
Fortunately, we don’t have to read all the database dependency information from the server and run the graph traversal on the client computer, as Oracle provides a method of doing this in SQL – CONNECT BY. So, we can put all the dependencies we want to include together in big bag with UNION ALL, then run a SELECT ... CONNECT BY on it, starting with objects in the schema we’re populating. We should end up with all the objects that might be affected by modifications in the initial schema we’re populating. Good solution? Well, no. For one thing, it’s sloooooow. all_dependencies, on my test databases, has got over 110,000 rows in it, and the entire query, for which Oracle was creating a temporary table to hold the big bag of graph edges, was often taking upwards of two minutes. This is too long, and would only get worse for large databases. But it had some more fundamental problems than just performance. 3. Comparison dependencies Consider the following schema: SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100)); What will happen if we used the dependency algorithm above on the source & target database? Well, SchemaA.Table1 has a foreign key reference to SchemaB.Table1, so that will be included in the source database population. On the target, SchemaA.Table1 has no such reference. Therefore SchemaB.Table1 will not be included in the target database population. In the resulting comparison of the two objects models, what you will end up with is: SOURCE  TARGET SchemaA.Table1 -> SchemaA.Table1 SchemaB.Table1 -> (no object exists) When this comparison is synchronized, we will see that SchemaB.Table1 does not exist, so we will try the following sequence of actions: Create SchemaB.Table1 Rebuild SchemaA.Table1, with foreign key to SchemaB.Table1 Oops. Because the dependencies are only followed within a single database, we’ve tried to create an object that already exists. To fix this we can include any objects found as dependencies in the source or target databases in the object population of both databases. SchemaB.Table1 will then be included in the target database population, and we won’t try and create objects that already exist. All good? Well, consider the following schema (again, only explicitly populating SchemaA, and synchronizing SchemaA.Table1): SOURCE   TARGET CREATE TABLE SchemaA.Table1 ( Col1 NUMBER REFERENCES SchemaB.Table1(col1));   CREATE TABLE SchemaA.Table1 ( Col1 VARCHAR2(100)); CREATE TABLE SchemaB.Table1 ( Col1 NUMBER PRIMARY KEY);   CREATE TABLE SchemaB.Table1 ( Col1 VARCHAR2(100) PRIMARY KEY); CREATE TABLE SchemaC.Table1 ( Col1 NUMBER);   CREATE TABLE SchemaC.Table1 ( Col1 VARCHAR2(100) REFERENCES SchemaB.Table1); Although we’re now including SchemaB.Table1 on both sides of the comparison, there’s a third table (SchemaC.Table1) that we don’t know about that will cause the rebuild of SchemaB.Table1 to fail if we try and synchronize SchemaA.Table1. That’s because we’re only running the dependency query on the schemas we’re explicitly populating; to solve this issue, we would have to run the dependency query again, but this time starting the graph traversal from the objects found in the other database. 
Furthermore, this dependency chain could be arbitrarily extended. This leads us to the following algorithm for finding all the dependencies of a comparison:
1. Find initial dependencies of schemas the user has selected to compare on the source and target
2. Include these objects in both the source and target object populations
3. Run the dependency query on the source, starting with the objects found as dependents on the target, and vice versa
4. Repeat 2 & 3 until no more objects are found
For the schema above, this will result in the following sequence of actions:
1. Find initial dependencies: SchemaA.Table1 -> SchemaB.Table1 found on source; no objects found on target
2. Include objects in both source and target: SchemaB.Table1 included in source and target
3. Run dependency query, starting with found objects: no objects to start with on source; SchemaB.Table1 -> SchemaC.Table1 found on target
4. Include objects in both source and target: SchemaC.Table1 included in source and target
5. Run dependency query on found objects: no objects found in source; no objects to start with in target
6. Stop
This will ensure that we include all the necessary objects to make any synchronization work. However, there is still the issue of query performance; the CONNECT BY on the entire database dependency graph is still too slow. After much sitting down and drawing complicated diagrams, we decided to move the graph traversal algorithm from the server onto the client (which turned out to run much faster on the client than on the server); and to ensure we don't read the entire dependency graph onto the client we also pull the graph across in bits – we start off with dependency edges involving schemas selected for explicit population, and whenever the graph traversal comes across a dependency reference to a schema we don't yet know about, a thunk is hit that pulls in the dependency information for that schema from the database. We continue passing more dependent objects back and forth between the source and target until no more dependency references are found. This gives us the list of all the extra objects to populate in the source and target, and object population can then proceed.
4. Object blacklists and fast dependencies
When we tested this solution, we were puzzled that in some of our databases most of the system schemas (WMSYS, ORDSYS, EXFSYS, XDB, etc) were being pulled in, and this was increasing the database registration and comparison time quite significantly. After debugging, we discovered that the culprits were database tables that used one of the Oracle PL/SQL types (eg the SDO_GEOMETRY spatial type). These were creating a dependency chain from the database tables we were populating to the system schemas, and hence pulling in most of the system objects in that schema. To solve this we introduced blacklists of objects we wouldn't follow any dependency chain through. As well as the Oracle-supplied PL/SQL types (MDSYS.SDO_GEOMETRY, ORDSYS.SI_COLOR, among others) we also decided to blacklist the entire PUBLIC and SYS schemas, as any references to those would likely lead to a blow-up in the dependency graph that would massively increase the database registration time, and could result in the client running out of memory. Even with these improvements, each dependency query was taking upwards of a minute. We discovered from Oracle execution plans that there were some columns, with dependency information we required, that were querying system tables with no indexes on them!
To cut a long story short, running the following query: SELECT * FROM all_tab_cols WHERE data_type_owner = ‘XDB’; results in a full table scan of the SYS.COL$ system table! This single clause was responsible for over half the execution time of the dependency query. Hence, the ‘Ignore slow dependencies’ option was born – not querying this and a couple of similar clauses to drastically speed up the dependency query execution time, at the expense of producing incorrect sync scripts in rare edge cases. Needless to say, along with the sync script action ordering, the dependency code in the database registration is one of the most complicated and most rewritten parts of the Schema Compare for Oracle engine. The beta of Schema Compare for Oracle is out now; if you find a bug in it, please do tell us so we can get it fixed!
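As an illustration of the CONNECT BY traversal described above (a hypothetical sketch, not the product's actual query): start from one schema's objects and walk ALL_DEPENDENCIES transitively to find everything they might rely on.

    -- Walk the dependency graph outwards from SCHEMAA (illustrative only)
    SELECT DISTINCT d.referenced_owner, d.referenced_name, d.referenced_type
    FROM   all_dependencies d
    START WITH d.owner = 'SCHEMAA'
    CONNECT BY NOCYCLE d.owner = PRIOR d.referenced_owner
                   AND d.name  = PRIOR d.referenced_name;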

    Read the article

  • How to center align background image in JPanel

    - by Jaguar
    I wanted to add a background image to my JFrame. By background image I mean that I can later add components on the JFrame or JPanel. Although I couldn't find how to add a background image to a JFrame, I found out how to add a background image to a JPanel from here: How to set background image in Java? This solved my problem, but since my JFrame is resizable I want to keep the image in the center. The code I found uses this method: public void paintComponent(Graphics g) { //Draw the previously loaded image to Component. g.drawImage(img, 0, 0, null); //Draw image } Can anyone say how to align the image to the center of the JPanel, as g.drawImage(img, 0, 0, null) draws at x=0 and y=0? Also, if there is a method to add a background image to a JFrame, I would like to know. Thanks.
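    A minimal sketch (class and field names are made up) of centring the image by offsetting the draw position from the panel's current size, so it stays centred as the frame is resized. Setting an instance of such a panel as the frame's content pane also gives the JFrame itself the background.

        import java.awt.Graphics;
        import java.awt.Image;
        import javax.swing.JPanel;

        class CenteredImagePanel extends JPanel {
            private final Image img;

            CenteredImagePanel(Image img) { this.img = img; }

            @Override
            protected void paintComponent(Graphics g) {
                super.paintComponent(g);                              // clear the panel first
                if (img != null) {
                    int x = (getWidth()  - img.getWidth(this))  / 2;  // horizontal offset
                    int y = (getHeight() - img.getHeight(this)) / 2;  // vertical offset
                    g.drawImage(img, x, y, this);
                }
            }
        }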

    Read the article

  • GetDC() DllImport for x64 apps

    - by devdept
    If you do a little research on the internet you'll see many DllImport styles for this user32.dll function: HDC GetDC(HWND hWnd); The question is: which type is more appropriate for .NET x64 apps (either compiled with the Platform target as AnyCPU on an x64 machine or specifically as x64)? IntPtr, for example, grows to a size of 8 in an x64 process; can this be a problem? Is uint more appropriate than UInt64? What is the size of the pointers this function uses in an x64 process? The DLL is called user32.dll; does it work as 32-bit or 64-bit on an x64 operating system? [DllImport("user32.dll",EntryPoint="GetDC")] public static extern IntPtr GetDC(IntPtr hWnd); public static extern uint GetDC(uint hWnd); public static extern int GetDC(int hWnd); Thanks!
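    A sketch of the pointer-sized declaration (the paired ReleaseDC is included for completeness; this is an illustration, not a statement about which of the quoted answers is official): HWND and HDC are handles, so IntPtr is the safe marshalling type under both x86 and x64.

        using System;
        using System.Runtime.InteropServices;

        static class NativeMethods
        {
            // IntPtr is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one,
            // matching the native handle size in either case.
            [DllImport("user32.dll")]
            public static extern IntPtr GetDC(IntPtr hWnd);

            [DllImport("user32.dll")]
            public static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);
        }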

    Read the article
