Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 862 of 1121

  • MDE Access decrypt JDBC

    - by michelemarcon
    I want to perform JDBC SQL queries on an MDE Access file. I set up the ODBC data source and everything worked well for one MDE file. Now I'm working with a newer version of the MDE file, and here is the result:

    java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Cannot read record. Read authorization unavailable for "tbl_mytable".

    If I open the MDE with the Access Runtime I am asked for a password, and after leaving the password blank I can see all the data. And of course, "tbl_mytable" does exist inside the database.
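    Since the Access Runtime prompts for a password and accepts a blank one, the newer MDE likely carries user-level (workgroup) security, and the ODBC driver is refusing reads for the anonymous user. Below is a minimal sketch of passing credentials and the workgroup file through the JDBC-ODBC bridge; the file paths, the .mdw location, and the "admin" user are assumptions, not values from the question:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MdeConnect {
        public static void main(String[] args) throws Exception {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); // bridge exists up to Java 7
            // DSN-less connection; SystemDB names the workgroup (.mdw) file
            // if the database uses user-level security.
            String url = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};"
                       + "DBQ=C:\\data\\newer.mde;"
                       + "SystemDB=C:\\data\\secured.mdw;"
                       + "UID=admin;PWD=";   // blank password, as in the Access Runtime
            Connection conn = DriverManager.getConnection(url);
            System.out.println("connected: " + !conn.isClosed());
            conn.close();
        }
    }

    If it is instead a plain database password, dropping SystemDB and UID and supplying only PWD in the connection string is usually enough.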

    Read the article

  • IIS Strategies for Accessing Secured Network Resources

    - by Emtucifor
    Problem: A user connects to a service on a machine, such as an IIS web site or a SQL Server database. The site or the database needs to gain access to network resources such as file shares (the most common case) or a database on a different server. Permission is denied, either because the account the service runs as has no network permissions in the first place, or because it lacks rights to the remote resource. I keep running into this problem over and over again and am tired of not having a really solid way of handling it. Here are the workarounds I'm aware of:

    1. Run IIS as a custom-created domain user who is granted high permissions. If permissions are granted one file share at a time, then every time I want to read from a new share I have to ask a network admin to add it for me. Eventually, with many web sites reading from many shares, it gets really complicated. If permissions are instead opened up wide so the user can access any file share in our domain, that presents an unnecessarily large security surface area. It also applies to all the sites running on IIS, rather than just the selected site or virtual directory that needs the access, a further surface-area problem.

    2. Still use the IUSR account, but give it network permissions and set up the same user name on the remote resource (a local user, not a domain user). This also has its problems. For example, there's a file share I have full sharing rights to, but I can't log in to the machine, so I have to find the right admin and ask him to do it for me. Any time something has to change, it's another request to an admin.

    3. Allow IIS users to connect as anonymous, but set the account used for anonymous access to a high-privilege one. This is even worse than giving the IIS IUSR full privileges, because it means my web site can't use any kind of security in the first place.

    4. Connect using Kerberos, then delegate. This sounds good in principle but has all sorts of problems. First, if you're using virtual web sites where the domain name you connect with is not the base machine name (as we do frequently), you have to set up a Service Principal Name on the web server using Microsoft's SetSPN utility. It's complicated and apparently prone to errors. You also have to ask your network/domain admin to change security policy for the web server so it is "trusted for delegation." If you don't get everything perfectly right, your intended Kerberos authentication silently becomes NTLM instead, and you can only impersonate rather than delegate, and thus can't reach out over the network as the user. This method can also be problematic because sometimes you need the web site or database to have permissions that the connecting user doesn't have.

    5. Create a service or COM+ application that fetches the resource for the web site. Services and COM+ packages run with their own credentials. Running as a high-privilege user is acceptable here, since they can enforce their own security and deny illegitimate requests, putting control in the hands of the application developer instead of the network admin. The implementation is actually really simple: in a programming language of your choice, you create a class with a function that returns an instance of the object you want (an ADODB.Connection, for example), build a DLL, and register it as a COM+ object. In your web-server-side code, you create an instance of the class and use the function, and since it is running under a different security context, calls to network resources work. The problem: I am using a COM+ package that does exactly this on Windows Server 2000 to deliver highly sensitive images to a secured web application. I tried moving the web site to Windows Server 2003 and was suddenly denied permission to instantiate the COM+ object, very likely a registry-permissions issue. I trawled around quite a bit and did not solve the problem, partly because I was reluctant to give the IUSR account full registry permissions. That seems like the same bad practice as just running IIS as a high-privilege user.

    6. Map drive letters to shares. This could theoretically work, but in my mind it's not a good long-term strategy. Even though mappings can be created with specific credentials, and this can be done by people other than a network admin, it means either far too many shared drives (small granularity) or too much permission granted to entire file servers (large granularity). Also, I haven't figured out how to map a drive so that the IUSR gets it: mapping a drive is per-user, and I don't know the IUSR account password to log in as it and create the mappings.

    7. Move the resources local to the web server/database. There are times when I've done this, especially with Access databases. Does the database have to live out on the file share? Sometimes it was just easiest to move the database to the web server or to the SQL database server (so the linked server to it would work). But I don't think this is a great all-around solution either, and it won't work when the resource is a service rather than a file.

    8. Move the service to the final web server/database. I suppose I could run a web server on my SQL Server machine, so the web site can connect to it using impersonation and make me happy. But do we really want random extra web servers on our database servers just so this is possible? No.

    9. Virtual directories in IIS. I know that virtual directories can make remote resources look as though they are local, and each virtual directory supports custom credentials. I haven't yet worked out how this would solve the problem for system calls, though. Users could reach file shares directly, but this won't help, say, classic ASP code access resources. I could use a URL instead of a file path to read remote data files in a web page, but that isn't going to help me make a connection to an Access database, a SQL Server database, or any other resource that uses a connection library rather than just reading bytes.

    I wish there were some kind of "service tunnel" I could create. Think about how a VPN makes remote resources look like they are local. With a richer, perhaps code-based aliasing mechanism, why couldn't even database connections occur under a defined security context? Why not a special Windows component that lets you specify, per user, what resources are available and what alternate credentials are used for the connection? File shares, databases, web sites, you name it. I guess I'm almost describing a specialized local proxy server. Anyway, so there's my list. I may update it if I think of more. Does anyone have any ideas for me? My current problem today is, yet again, a web site that needs to connect to an Access database on a file share. Here we go again...

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset; the results are gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, applies the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

    To get started, we'll create a demo data set and load the plyr package.

    set.seed(1)
    d <- data.frame(year = rep(2000:2014, each = 3),
                    count = round(runif(45, 0, 20)))
    dim(d)
    library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

    # Example 1
    res <- ddply(d, "year", function(x) {
      mean.count <- mean(x$count)
      sd.count <- sd(x$count)
      cv <- sd.count/mean.count
      data.frame(cv.count = cv)
    })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

    D <- ore.push(d)
    res <- ore.groupApply(D, D$year, function(x) {
      mean.count <- mean(x$count)
      sd.count <- sd(x$count)
      cv <- sd.count/mean.count
      data.frame(year = x$year[1], cv.count = cv)
    }, FUN.VALUE = data.frame(year = 1, cv.count = 1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result.

    The results in both cases are the same; however, the ore.groupApply result is an ore.frame, so the data stays in the database until it's actually required. This can result in significant memory and time savings when the data is large.

    R> class(res)
    [1] "ore.frame"
    attr(,"package")
    [1] "OREbase"
    R> head(res)
      year  cv.count
    1 2000 0.3984848
    2 2001 0.6062178
    3 2002 0.2309401
    4 2003 0.5773503
    5 2004 0.3069680
    6 2005 0.3431743

    To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

    The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

    # Example 2
    ddply(d, "year", summarise, mean.count = mean(count))
    res <- ore.groupApply(D, D$year, function(x) {
      mean.count <- mean(x$count)
      data.frame(year = x$year[1], mean.count = mean.count)
    }, FUN.VALUE = data.frame(year = 1, mean.count = 1))

    R> head(res)
      year mean.count
    1 2000   7.666667
    2 2001  13.333333
    3 2002  15.000000
    4 2003   3.000000
    5 2004  12.333333
    6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

    # Example 3
    ddply(d, "year", transform, total.count = sum(count))
    res <- ore.groupApply(D, D$year, function(x) {
      total.count <- sum(x$count)
      data.frame(year = x$year[1], count = x$count, total.count = total.count)
    }, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))

    > head(res)
      year count total.count
    1 2000     5          23
    2 2000     7          23
    3 2000    11          23
    4 2001    18          40
    5 2001     4          40
    6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

    # Example 4
    ddply(d, "year", mutate, mu = mean(count), sigma = sd(count), cv = sigma/mu)
    res <- ore.groupApply(D, D$year, function(x) {
      mu <- mean(x$count)
      sigma <- sd(x$count)
      cv <- sigma/mu
      data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
    }, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))

    R> head(res)
      year count        mu    sigma        cv
    1 2000     5  7.666667 3.055050 0.3984848
    2 2000     7  7.666667 3.055050 0.3984848
    3 2000    11  7.666667 3.055050 0.3984848
    4 2001    18 13.333333 8.082904 0.6062178
    5 2001     4 13.333333 8.082904 0.6062178
    6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

    # Example 5
    baseball.dat <- subset(baseball, year > 2000)  # data from the plyr package
    x <- ddply(baseball.dat, c("year", "team"), summarize, homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

    BB.DAT <- ore.push(baseball)
    BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
    BB.DAT2 <- subset(BB.DAT, year > 2000)
    X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
      data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
    }, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
    res <- ore.sort(X, by = c("year", "team"))

    R> head(res)
      year team homeruns
    1 2001  ANA        4
    2 2001  ARI      155
    3 2001  ATL       63
    4 2001  BAL       58
    5 2001  BOS       77
    6 2001  CHA       63

    Our next example is derived from the ggplot function documentation and illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph is returned to the client window.

    # Example 6 with ggplot2
    library(ggplot2)
    df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                     y = rnorm(30))
    # Compute sample mean and standard deviation in each group
    library(plyr)
    ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
    # Set up a skeleton ggplot object and add layers:
    ggplot() +
      geom_point(data = df, aes(x = gp, y = y)) +
      geom_point(data = ds, aes(x = gp, y = mean),
                 colour = 'red', size = 3) +
      geom_errorbar(data = ds, aes(x = gp, y = mean,
                                   ymin = mean - sd, ymax = mean + sd),
                    colour = 'red', width = 0.4)

    DF <- ore.push(df)
    ore.tableApply(DF, function(df) {
      library(ggplot2)
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)
    })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

    df2 <- rbind(df, df, df)
    df2$y <- df2$y + rnorm(nrow(df2))
    df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))  # 30 rows per copy of df
    DF2 <- ore.push(df2)
    res <- ore.groupApply(DF2, DF2$index, function(df) {
      df <- df[, 1:2]
      library(ggplot2)
      library(plyr)
      ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
      ggplot() +
        geom_point(data = df, aes(x = gp, y = y)) +
        geom_point(data = ds, aes(x = gp, y = mean),
                   colour = 'red', size = 3) +
        geom_errorbar(data = ds, aes(x = gp, y = mean,
                                     ymin = mean - sd, ymax = mean + sd),
                      colour = 'red', width = 0.4)
    }, parallel = TRUE)
    res[[1]]
    res[[2]]
    res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which gives the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • XML output from MySQL

    - by NumberFour
    Hi, is there any chance of getting the output from a MySQL query directly as XML? I'm referring to something like MSSQL has with the SQL-XML plugin, for example:

    SELECT * FROM table WHERE 1 FOR XML AUTO

    returns text (or the xml data type in MSSQL, to be precise) which contains an XML markup structure generated according to the columns in the table. With SQL-XML there is also an option of explicitly defining the output XML structure, like this:

    SELECT 1 AS tag,
           NULL AS parent,
           emp_id AS [employee!1!emp_id],
           cust_id AS [customer!2!cust_id],
           region AS [customer!2!region]
    FROM table
    FOR XML EXPLICIT

    which generates XML as follows:

    <employee emp_id='129'>
        <customer cust_id='107' region='Eastern'/>
    </employee>

    Do you have any clues on how to achieve this in MySQL? Thanks in advance for your answers.
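    MySQL has no FOR XML clause, so the usual workarounds are the command-line client's --xml switch (mysql --xml -e "SELECT ..." emits each row as XML) or assembling the markup in SQL with string functions. A sketch of the CONCAT approach, reusing the column names from the question; the table name is a placeholder:

    SELECT CONCAT('<employee emp_id=''', emp_id, '''>',
                  '<customer cust_id=''', cust_id,
                  ''' region=''', region, '''/>',
                  '</employee>') AS xml_row
    FROM my_table;

    Note that values containing <, > or quotes would need escaping when built this way; the client-side --xml switch handles that for you.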

    Read the article

  • ExpressionEngine: {exp:query} producing error

    - by Josh Brown
    Hi. I have this code in place to pull some related data from the database, based on the ID of another weblog entry. The relationship custom field would not work for my situation. I have tested the SQL with MySQL and get no errors, but I get an error when I pull up the page:

    {exp:query sql="SELECT entry_id,field_id_16,field_id_19 FROM exp_weblog_data WHERE field_id_15='2' AND weblog_id='6'"}

    Also, I want to populate "field_id_15" from the URL. I have tried using the "segment_*" tag, but nothing. Maybe it is something else. Any help is greatly appreciated. Thanks, Josh
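    On the URL part: ExpressionEngine exposes URL segments as {segment_1} through {segment_9} global variables, and these are normally parsed inside tag parameters, so something along these lines should work (a sketch; which segment carries the value is an assumption, so adjust the number to your URL layout, and constrain the segment to expected values since raw URL input in SQL is an injection risk):

    {exp:query sql="SELECT entry_id, field_id_16, field_id_19
                    FROM exp_weblog_data
                    WHERE field_id_15 = '{segment_3}' AND weblog_id = '6'"}
        {entry_id}: {field_id_16}
    {/exp:query}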

    Read the article

  • Attach and upgrade TFS 2005 databases to a TFS 2008 installation

    - by henriksen
    I have a Windows Server 2008 box with SQL 2008 and TFS 2008 installed, and another box with SQL 2005 and TFS 2005 installed. Does anyone know of a way I can just transfer the databases (or the data some other way) from TFS 2005 to TFS 2008? Any data on the 2008 box can be deleted. The machines are not in a domain, so all accounts are local; I'll have to fix that afterwards, but that's manageable. Basically, can I attach the databases and run an upgrade command?
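    Roughly, yes: the commonly described path is to back up the TFS 2005 databases, restore them on the SQL 2008 instance, and then let TFS 2008 setup upgrade them by choosing the upgrade option during installation. A sketch of the move for one database; the database names match a default TFS 2005 install, the paths are placeholders, and the logical file names in the MOVE clauses are guesses (check them with RESTORE FILELISTONLY):

    -- On the old SQL 2005 instance
    BACKUP DATABASE TfsVersionControl
        TO DISK = 'C:\backup\TfsVersionControl.bak';

    -- On the new SQL 2008 instance (repeat for TfsIntegration,
    -- TfsWorkItemTracking, TfsWorkItemTrackingAttachments,
    -- TfsBuild, TfsActivityLogging and TfsWarehouse)
    RESTORE DATABASE TfsVersionControl
        FROM DISK = 'C:\backup\TfsVersionControl.bak'
        WITH MOVE 'TfsVersionControl'     TO 'D:\SQLData\TfsVersionControl.mdf',
             MOVE 'TfsVersionControl_log' TO 'D:\SQLData\TfsVersionControl_log.ldf';

    Since the accounts are local on both machines, expect to remap identities afterwards; TfsAdminUtil Sid is the tool usually mentioned for that step.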

    Read the article

  • asp.net InsertCommand to return latest insert ID

    - by Stijn Van Loo
    Dear all, I'm unable to retrieve the latest inserted ID from my SQL Server 2000 DB using a typed dataset in ASP.NET. I have created a TableAdapter and ticked "Refresh the data table" and "Generate Insert, Update and Delete statements". This auto-generates the Fill and GetData methods and the Insert, Update, Select and Delete statements. I have tried every possible solution in this thread, http://forums.asp.net/t/990365.aspx, but I'm still unsuccessful; it always returns 1 (the number of affected rows). I do not want to create a separate insert method, as the auto-generated InsertCommand perfectly suits my needs. As suggested in the thread above, I have tried to update the InsertCommand SQL syntax to add SELECT SCOPE_IDENTITY() or something similar, and I have tried to add a parameter of type ReturnValue, but all I get is the number of affected rows. Does anyone have a different take on this? Thanks in advance! Stijn
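    For reference: with "Refresh the data table" ticked, the generated InsertCommand already ends with a SELECT that re-reads the new row, so the identity does come back; it just lands in the DataRow rather than in the return value (the generated Insert/Update methods return the affected-row count by design). A sketch of reading it after Update; the dataset, table, and column names are placeholders for whatever the wizard generated:

    // Typed-dataset pattern: the identity is refreshed onto the row itself.
    OrdersDataSet.OrdersRow row = ordersDataSet.Orders.NewOrdersRow();
    row.CustomerName = "Acme";
    ordersDataSet.Orders.AddOrdersRow(row);

    ordersTableAdapter.Update(row);   // runs the INSERT, then the refresh SELECT
    int newId = row.id;               // identity assigned by SQL Server

    If the value is needed as a return instead, the SELECT SCOPE_IDENTITY() trick from that thread applies to queries added in the designer with ExecuteMode set to Scalar, not to the built-in Insert.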

    Read the article

  • How to connect an existing wizard-generated DataSet to a different server (same database) at run time

    - by Kiril
    Hello, I am coding a simple space-empire management game in Visual C# 2008, which relies on connecting to a remote SQL Server database to get/store data. I would like the user to be able to connect to a user-specified SQL Server from the login screen (he specifies IP address, port, database name, ID, and password, and presses the "Connect" button). However, I found out that the DataSet's connection string property is read-only and cannot be changed. Is there any way to point the wizard-generated DataSet at a user-specified server at run time? Thanks in advance.
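    The connection string baked into the typed DataSet is indeed fixed at design time, but each generated TableAdapter exposes a Connection property you can overwrite before calling Fill (it is internal by default; the designer's ConnectionModifier property can make it public, or it can be assigned from a partial class). A sketch, with the adapter and table names standing in for whatever the wizard generated:

    using System.Data.SqlClient;

    // Build the connection string from the login-screen fields.
    var csb = new SqlConnectionStringBuilder
    {
        DataSource = serverIp + "," + port,   // e.g. "203.0.113.7,1433"
        InitialCatalog = databaseName,
        UserID = userId,
        Password = password
    };

    // GameDataSetTableAdapters.PlanetsTableAdapter is a placeholder name.
    var adapter = new GameDataSetTableAdapters.PlanetsTableAdapter();
    adapter.Connection = new SqlConnection(csb.ConnectionString);
    adapter.Fill(gameDataSet.Planets);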

    Read the article

  • confusion about transactions and msdtc

    - by muhan
    I have some basic confusion about how transactions and MSDTC work together. I have a basic client/server WinForms app. The app uses TransactionScope to encapsulate several SQL commands that are executed on the SQL server. The app seemed to work fine when I enabled MSDTC network access on the server only. Then one day it stopped working, saying network access was not enabled. Now it seems that I have to enable MSDTC network access on both the client computer and the server for TransactionScope to work. Does the client or the server MSDTC service do the transaction work? Or maybe it's both? Does anyone have guidance on whether MSDTC network access is needed on both client and server, or just the server?

    Read the article

  • Connect rails application to MsSQL 2005 from Windows

    - by Enrico Carlesso
    Hi guys. I (sadly) have to deploy a Rails application on Windows XP which has to connect to Microsoft SQL Server 2005. Searching the web turns up a lot of hits for connecting from Linux to MSSQL, but I cannot find out how to do it from Windows. Basically I followed these steps:

    - Install the dbi gem
    - Install the activerecord-sqlserver-adapter gem

    My database.yml now looks like this:

    development:
      adapter: sqlserver
      mode: odbc
      dsn: test_dj
      host: HOSTNAME\SQLEXPRESS
      database: test_dj
      username: guest
      password: guest

    But I'm unable to connect. When I run rake db:migrate I get:

    IM002 (0) [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    I'm not a Windows user, so I don't really understand the meaning of the dsn element. Does someone have an idea how to solve this? Thanks in advance.
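    That IM002 error comes from the ODBC driver manager, not from SQL Server: with mode: odbc the adapter looks the dsn value up in the Windows ODBC Data Source Administrator, so a DSN named test_dj has to exist there first. Create a System DSN via odbcad32.exe using the SQL Server (or SQL Native Client) driver, pointed at HOSTNAME\SQLEXPRESS with default database test_dj. Once the DSN exists, a pared-down config is usually enough; a sketch under that assumption:

    development:
      adapter: sqlserver
      mode: odbc
      dsn: test_dj
      username: guest
      password: guest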

    Read the article

  • Establishing a connection from Java to Access 2010 64-bit

    - by user993250
    After going through several tutorials and blog posts, I have unsuccessfully tried to set up a connection from Java to a Microsoft Access database. My system specifications are:

    - Windows 7 (64-bit)
    - odbcad32.exe (64-bit)
    - Access 2010 (64-bit)
    - JRE 6 (64-bit)

    The code that I wrote for establishing a connection:

    public Connection makeConn() throws ClassNotFoundException, SQLException {
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        conn = DriverManager.getConnection("jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\\Users\\ARIFAH\\Desktop\\sampleDb.mdb");
        return conn;
    }

    In the ODBC Data Source Administrator (located at %windir%\system32\odbcad32.exe) I performed the following steps: on the User DSN tab I clicked Add, chose Microsoft Access Driver (*.mdb, *.accdb) [file ACEODBC.dll], and clicked Finish; I then entered Data Source Name: TestDriver, Description: test driver for Access 2010 64-bit, and under Database clicked Select and browsed to C:\Users\ARIFAH\Desktop\sampleDb.mdb, then OK, Apply, OK.

    Now when I try to run my application, this is the error I get:

    java.sql.SQLException: [Microsoft][ODBC Microsoft Access Driver] Could not find file '(unknown)'.
        at sun.jdbc.odbc.JdbcOdbc.createSQLException(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbc.standardError(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbc.SQLDriverConnect(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbcConnection.initialize(Unknown Source)
        at sun.jdbc.odbc.JdbcOdbcDriver.connect(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at java.sql.DriverManager.getConnection(Unknown Source)
        at dataBase.connection.makeConn(connection.java:22)
        at newmodulewizrd.ui.App.<init>(App.java:32)
        at archetypedcomponent.commands.newModuleHandler.execute(newModuleHandler.java:39)
        ... (remaining frames are Eclipse/Equinox framework calls)

    Can somebody help with this?
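    Worth noting: the DSN-less string in makeConn names the old 32-bit Jet driver, "Microsoft Access Driver (*.mdb)", which the 64-bit ODBC manager does not have, while the DSN that was created ("TestDriver") uses the 64-bit ACE driver, whose registered name includes *.accdb. Either of the following (a sketch reusing the names and path from the question) should line up with the 64-bit stack:

    // Option 1: go through the DSN created in the 64-bit ODBC administrator
    conn = DriverManager.getConnection("jdbc:odbc:TestDriver");

    // Option 2: DSN-less, naming the 64-bit ACE driver exactly as registered
    conn = DriverManager.getConnection(
        "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb, *.accdb)};"
      + "DBQ=C:\\Users\\ARIFAH\\Desktop\\sampleDb.mdb");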

    Read the article

  • Attempt to set foreign key to nonexistent table, sf_guard_user!

    - by Stick it to THE MAN
    I am using SF 1.3.2 with the Propel ORM on Ubuntu 9.10. I created a new SF project and simply copied the sfGuard plugin folder from a pre-existing SF 1.3.2 project. I manually edited the schema.yml in the new project, adding a user table that references the sf_guard_user table. When I run propel:build-sql, the task fails with the error message:

    Attempt to set foreign key to nonexistent table, sf_guard_user!

    I'm not sure why this error is occurring (it doesn't make sense to me, since the sfGuard plugin works fine with the previous project, with no changes made). What am I missing? BTW, I have not created any apps in the project yet; I am just running the build-sql task first, to make sure that Propel has no problems parsing my YAML files etc.
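    One thing to rule out first: in symfony 1.3, a plugin copied into a fresh project does nothing until it is enabled, and if sfGuardPlugin is disabled its schema never reaches propel:build-sql, which produces exactly this missing-table error. A sketch of the enabling step in config/ProjectConfiguration.class.php (the sfPropelPlugin line assumes a Propel project, as described):

    class ProjectConfiguration extends sfProjectConfiguration
    {
      public function setup()
      {
        // Without these, sfGuardPlugin's schema.yml is ignored and
        // propel:build-sql never sees sf_guard_user.
        $this->enablePlugins('sfPropelPlugin', 'sfGuardPlugin');
      }
    }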

    Read the article

  • DBD::Advantage and 64-bit Perl - Always 6060

    - by WarheadsSE
    I realize that I am attempting to go beyond the "supported" behavior of the manufacturer's released drivers for Perl; after all, they have only released the package with x86 .so's. However, since I cannot use their package with x64 Perl on a RHEL 5.4 x86_64 box, and maintaining a separate install of x86 Perl just for this one package is unappealing, I have made an attempt to get this working using the 64-bit .so's that accompany other driver packages for Advantage. What I have done to this point:

    - downloaded the beta 10 DBI drivers, in 32-bit
    - downloaded the beta 10 PHP extension (it contains both x86 and x86_64)
    - copied the required shared libraries into the ads-lib location (e.g. /usr/local/ads/lib64)
    - compiled the Perl DBI driver with the path to the lib64 .so's

    Good compilation, good install, good use. The problem is that I always get:

    failed: [iAnywhere Solutions][Advantage SQL][ASA] Error 6060: Advantage Database Server not available on specified server. axServerConnect (SQL-HY000)(DBD: db_login/SQLConnect err=-1)

    Does anyone have any ideas? EDIT: fixed package name in post title.

    Read the article

  • Using Hibernate to do a query involving two tables

    - by Nathan Spears
    I'm inexperienced with SQL in general, so using Hibernate is like looking for an answer before I know exactly what the question is. Please feel free to correct any misunderstandings I have. I am on a project where I have to use Hibernate. Most of what I am doing is pretty basic and I could copy and modify; now I would like to do something different, and I'm not sure how configuration and syntax need to come together.

    Let's say I have two tables. Table A has two (relevant) columns, user GUID and manager GUID. Obviously managers can have more than one user under them, so queries on manager can return more than one row. Additionally, a manager can manage the same user on multiple projects, so the same user can be returned multiple times for the same manager query. Table B has two columns, user GUID and user full name; a one-to-one mapping there.

    I want to query on manager GUID in Table A, group the results by unique user GUID (so the same user isn't in the results twice), and then return those users' full names from Table B. I could do this in SQL without too much trouble, but I want to use Hibernate so I don't have to parse the SQL results by hand. That's one of the points of using Hibernate, isn't it?

    Right now I have Hibernate mappings that map each column in Table A to a field (well, the get/set methods, I guess) in a DAO object that I wrote just to hold that table's data. I could also use the Hibernate DAOs I have to access each table separately and do each of the things I mentioned above in separate steps, but that would be less efficient (I assume) than doing one query. I wrote a service object to hold the data that gets returned from the query (my example is simplified; I'm going to keep some other data from Table A and get multiple columns from Table B), but I'm at a loss for how to write a DAO that can do the join, or how to use the DAOs I have to do the join. FYI, here is a sample of my Hibernate config file (simplified to match my example):

    <hibernate-mapping package="com.my.dao">
        <class name="TableA" table="table_a">
            <id name="pkIndex" column="pk_index" />
            <property name="userGuid" column="user_guid" />
            <property name="managerGuid" column="manager_guid" />
        </class>
    </hibernate-mapping>

    So then I have a DAO implementation class that does queries and returns lists, like public List<TableA> findByHQL(String hql, Map<String, String> params), etc. I'm not sure how "best practice" that is either.
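    For what it's worth, no special DAO plumbing is needed for the join itself; once Table B is mapped like Table A (the class name TableB and its property names below are assumptions), HQL can relate the two through a theta-style join and return the distinct full names directly:

    // Distinct full names of users under one manager; a sketch,
    // assuming a TableB mapping with userGuid and fullName properties.
    List<String> names = session.createQuery(
            "select distinct b.fullName " +
            "from TableA a, TableB b " +
            "where b.userGuid = a.userGuid " +
            "  and a.managerGuid = :managerGuid")
        .setParameter("managerGuid", managerGuid)
        .list();

    Mapping a proper many-to-one association from TableA to TableB on userGuid would allow an explicit join instead, but the cross-join-plus-where form above works with plain property mappings.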

    Read the article

  • Pagination links broken - php/jquery

    - by ClarkSKent
    Hey, I'm still trying to get my pagination links to load properly dynamically, but I can't seem to find a solution to this one problem: when I click any of the pagination number links to go to the next page, the new content does not load. Literally nothing happens, and looking at the console in Firebug, nothing is sent or loaded. On the main page I have three links to filter the content and display it. When any of these links are clicked, the results are loaded and displayed along with the associated pagination numbers for that specific content.

    I believe the problem is coming from the SQL query in generate_pagination.php (seen below). When I hard-code the SQL category part it works, but it is not dynamic at all. This is why I'm calling $ids = $_GET['ids']; and trying to put that into the category section, but then the numbers don't display at all. If I echo out the $ids variable and click on a filter, it does display the correct name/id, so I don't know why this doesn't work.

    Here is the main page, so you can see how I am including and starting the function (I'm new to PHP):

    <?php include_once('generate_pagination.php'); ?>
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.1/jquery.min.js"></script>
    <script type="text/javascript" src="jquery_pagination.js"></script>

    <div id="loading" ></div>
    <div id="content" data-page="1"></div>

    <ul id="pagination">
    <?php
    //Pagination Numbers
    for($i=1; $i<=$pages; $i++) {
      echo '<li class="page_numbers" id="page'.$i.'">'.$i.'</li>';
    }
    ?>
    </ul>
    <br />
    <br />
    <a href="#" class="category" id="marketing">Marketing</a>
    <a href="#" class="category" id="automotive">Automotive</a>
    <a href="#" class="category" id="sports">Sports</a>

    Here is the generate pagination script, where the problem seems to occur:

    <?php
    $ids=$_GET['ids'];
    include_once('config.php');
    $per_page = 3;

    //Calculating no of pages
    $sql = "SELECT COUNT(*) FROM explore WHERE category='$ids'";
    $result = mysql_query($sql);
    $count = mysql_fetch_row($result);
    $pages = ceil($count[0]/$per_page);
    ?>

    And here is the jQuery script, in case someone wants to see it:

    $(document).ready(function(){

      //Display Loading Image
      function Display_Load() {
        $("#loading").fadeIn(900,0);
        $("#loading").html("<img src='bigLoader.gif' />");
      }

      //Hide Loading Image
      function Hide_Load() {
        $("#loading").fadeOut('slow');
      };

      //Default Starting Page Results
      $("#pagination li:first").css({'color' : '#FF0084'}).css({'border' : 'none'});
      Display_Load();
      $("#content").load("pagination_data.php?page=1", Hide_Load());

      //Sort content by category
      $("a.category").click(function() {
        Display_Load();
        var this_id = $(this).attr('id');
        $.get("pagination.php", { category: this.id }, function(data){
          //Load your results into the page
          var pageNum = $('#content').attr('data-page');
          $("#pagination").load('generate_pagination.php?category=' + pageNum + '&ids=' + this_id);
          $("#content").load("filter_marketing.php?page=" + pageNum + '&id=' + this_id, Hide_Load());
        });
      });

      //Pagination Click
      $("#pagination li").click(function(){
        Display_Load();
        //CSS Styles
        $("#pagination li")
          .css({'border' : 'solid #dddddd 1px'})
          .css({'color' : '#0063DC'});
        $(this)
          .css({'color' : '#FF0084'})
          .css({'border' : 'none'});
        //Loading Data
        var pageNum = $(this).attr("id").replace("page","");
        $("#content").load("pagination_data.php?page=" + pageNum, function(){
          $(this).attr('data-page', pageNum);
          Hide_Load();
        });
      });
    });

    If anyone could assist me in solving this problem, that would be great. Thanks.
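    One concrete bug stands out: generate_pagination.php is include_once'd at the top of the main page, where $_GET['ids'] is never set, so the COUNT query runs with an empty category and $pages ends up 0 on the first load. Guarding (and escaping) the parameter keeps both the direct include and the AJAX call working; a sketch, where the 'marketing' default is an assumption:

    <?php
    // generate_pagination.php
    include_once('config.php');

    $per_page = 3;
    $ids = isset($_GET['ids']) ? $_GET['ids'] : 'marketing'; // default category
    $ids = mysql_real_escape_string($ids); // never put raw GET input into SQL

    $sql    = "SELECT COUNT(*) FROM explore WHERE category='$ids'";
    $result = mysql_query($sql);
    $count  = mysql_fetch_row($result);
    $pages  = ceil($count[0] / $per_page);
    ?>

    Separately, the li elements loaded into #pagination after a filter click never receive the click handler, because $("#pagination li").click(...) binds only to elements that exist at ready time; with jQuery 1.4.1 the usual fix is $("#pagination li").live("click", ...).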

    Read the article

  • How hard is it to modify the Django Models?

    - by alex
    I am doing geolocation, and Django does not have a PointField, so I am forced to write raw SQL. GeoDjango, the Django geographic library, does not support the following query for MySQL databases (can someone verify that for me?):

    cursor.execute("SELECT id FROM l_tag WHERE\
    (GLength(LineStringFromWKB(LineString(asbinary(utm),asbinary(PointFromWKB(point(%s, %s)))))) < %s + accuracy + %s)\

    I don't know why the GeoDjango library cannot do this on a MySQL database. I hate writing raw SQL for calculating distances between two points. Is there a way I can create my own library for Django that can handle this? If so, how hard is it?
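    Not very hard, at least for storage: a minimal custom field only has to report the database column type, and the distance math can stay in parameterized raw SQL. A sketch (the db_type signature with a connection argument matches Django 1.2+; earlier versions take no argument, and the LTag model and accuracy field are assumptions based on the query above):

    from django.db import models

    class PointField(models.Field):
        """Bare-bones MySQL POINT column; spatial functions still go
        through raw SQL."""
        def db_type(self, connection):
            return 'point'

    class LTag(models.Model):
        utm = PointField()
        accuracy = models.FloatField()

        class Meta:
            db_table = 'l_tag'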

    Read the article

  • In cakePHP, how to retrieve joined result from multiple tables

    - by Manish Sharma
    Hi, can anyone tell me how to retrieve a joined result from multiple tables in cakePHP (using cakePHP's MVC architecture)? For example, I have three tables to join (tbl_topics, tbl_items, tbl_votes). Their relationship is defined as follows: a topic can have many items, and an item can have many votes. Now I want to retrieve a list of topics with the count of all votes on all items for each topic. The SQL query for this is written below:

    SELECT Topic.*, count(Vote.id) voteCount
    FROM tbl_topics AS Topic
    LEFT OUTER JOIN tbl_items AS Item ON (Topic.id = Item.topic_id)
    LEFT OUTER JOIN tbl_votes AS Vote ON (Item.id = Vote.item_id);

    My problem is that I can do this easily using the $this-><Model Name>->query function, but that requires SQL code to be written in the controller, which I don't want. I'm trying to find some other way to do this (like find()).
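    For reference, find() can express this without raw SQL in the controller via the 'joins' and 'group' options (available in CakePHP 1.2/1.3); the aggregate lands in the result's pseudo-model slot, typically $topics[0][0]['voteCount']. A sketch against the tables from the question:

    $topics = $this->Topic->find('all', array(
        'fields' => array('Topic.*', 'COUNT(Vote.id) AS voteCount'),
        'joins' => array(
            array(
                'table' => 'tbl_items',
                'alias' => 'Item',
                'type' => 'LEFT',
                'conditions' => array('Topic.id = Item.topic_id'),
            ),
            array(
                'table' => 'tbl_votes',
                'alias' => 'Vote',
                'type' => 'LEFT',
                'conditions' => array('Item.id = Vote.item_id'),
            ),
        ),
        'group' => array('Topic.id'),
    ));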

    Read the article

  • Problem with SqlServer 2005 when opening connections

    - by Jose Obregon
    I have a WinForms application, and I use EntLib to connect to a SQL Server 2005 DB. The application is working OK, but sometimes, and lately more often, we receive this error from the DB when opening the connection:

    A connection was successfully established with the server, but then an error occurred during the login process. (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)

    The problem is intermittent. The user works fine for a couple of hours, and then suddenly the exception is thrown. Sometimes it happens when we run a small process that loads a file and then inserts the data into the DB. If anybody has any thoughts on this, please help.

    Read the article

  • asp.net Dynamic Data Site with custom MetaData

    - by loviji
    Hello, I'm searching for information about configuring my own metadata in an ASP.NET Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below:

    CREATE TABLE [dbo].[someTable](
        [id] [int] NOT NULL,
        [pname] [nvarchar](20) NULL,
        [FullName] [nvarchar](50) NULL,
        [age] [int] NULL)

    and there are two MS SQL tables (that I've created), sysTables and sysColumns.

    sysTables:
    ID | sysTableName | TableName | TableDescription
    1  | someTable    | Persons   | All data about persons in the system

    sysColumns:
    ID | TableName | sysColumnName      | ColumnName | ColumnDesc                     | ColumnType   | MUnit
    1  | someTable | sometable_pname    | Name       | Person name (e.g. John)        | nvarchar(20) | null
    2  | someTable | sometable_Fullname | Full Name  | Person name (e.g. John Black)  | nvarchar(50) | null
    3  | someTable | sometable_age      | Age        | Person age                     | int          | null

    I want the Details/Edit/Insert/List/ListDetails pages to use sysColumns and sysTables as metadata, because, for example, on the Details page "FullName" is not as pretty as "Full Name". Is this possible? Any ideas? Thanks.
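    Dynamic Data's built-in mechanism for this is attribute metadata on a "buddy" class rather than user tables; driving it from sysTables/sysColumns instead would mean writing a custom metadata provider, which is considerably more work. The attribute version, as a sketch (the class names assume a LINQ to SQL or Entity Framework entity generated for someTable):

    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    [MetadataType(typeof(SomeTableMetadata))]
    public partial class someTable { }

    public class SomeTableMetadata
    {
        [ScaffoldColumn(false)]        // hide the raw id column
        public object id { get; set; }

        [DisplayName("Full Name")]     // shown instead of "FullName"
        public object FullName { get; set; }
    }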

    Read the article

  • Kendo UI, searchable Combobox

    - by user2083524
    I am using the free Kendo UI Core framework. I am looking for a searchable ComboBox that fires the SQL query after the user types, for example, two letters. Behind my list box there are more than 10,000 items, and right now it takes too much time to load or refresh the page. Is it possible to trigger the SQL query only on user input, like the AutoComplete widget does? My code is:

    <link href="test/styles/kendo.common.min.css" rel="stylesheet" />
    <link href="test/styles/kendo.default.min.css" rel="stylesheet" />
    <script src="test/js/jquery.min.js"></script>
    <script src="test/js/kendo.ui.core.min.js"></script>
    <script>
    $(document).ready(function() {
        var objekte = $("#objekte").kendoComboBox({
            placeholder: "Objekt auswählen",
            dataTextField: "kurzname",
            dataValueField: "objekt_id",
            minLength: 2,
            delay: 0,
            dataSource: new kendo.data.DataSource({
                transport: { read: "test/objects.php" },
                schema: { data: "data" }
            })
        }).data("kendoComboBox");
    });
    </script>
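    The missing piece is server filtering on the DataSource: with serverFiltering: true, the ComboBox stops downloading the whole list, waits for minLength characters, and sends the typed text to the transport, much like the AutoComplete widget. A sketch of the adjusted init; objects.php then has to read the filter value from the request and run the narrowed SQL (with the default transport it arrives as filter[filters][0][value]):

    $(document).ready(function () {
        var objekte = $("#objekte").kendoComboBox({
            placeholder: "Objekt auswählen",
            dataTextField: "kurzname",
            dataValueField: "objekt_id",
            filter: "contains",  // match anywhere in kurzname
            minLength: 2,        // query only after two characters
            delay: 0,
            dataSource: new kendo.data.DataSource({
                serverFiltering: true, // defer filtering to objects.php
                transport: { read: "test/objects.php" },
                schema: { data: "data" }
            })
        }).data("kendoComboBox");
    });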

    Read the article

  • Managing Team Development with SSAS, TFS, & BIDS

    - by Kevin D. White
    I am currently the single BI developer over a corporate data warehouse and cube. I use SQL Server 2008, SSAS, and SSIS as my basic toolkit, and Visual Studio + BIDS and TFS for my IDE and source control. I am about to take on multiple projects with an offshore vendor, and I am worried about managing change. My major concern is managing merges and changes between me and the offshore team. Merging and managing changes to SQL and XML for just one person is bad enough, but with multiple developers it seems like a nightmare. Any thoughts on how best to structure development, knowing that sometimes there is no way to avoid multiple individuals making changes to the same file?

    Read the article

  • Fast search in XMl files in .NET (or How to index XML files)

    - by codymanix
    I have to implement a search feature that can quickly perform arbitrarily complex queries on XML data. If the user makes a query, all XML files must be searched to find possible matches. The users will have lots of XML files (a few tens of thousands or more), typically a few kilobytes each in size. All the XML files have almost the same structure. I already benchmarked XPath; it is too slow for my needs. How can this be done most efficiently? Is it possible to create indexes for the contents of the XML files (preserving content semantics, not just plain full-text search)? Would it be useful to put the XML data into an (embedded) SQL database and run the queries with SQL? What other possibilities do I have?
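    On the embedded-SQL idea: one approach that preserves semantics is to shred each document once, at index time, into a path/value table, so queries hit B-tree indexes instead of re-parsing tens of thousands of files. A sketch of the schema and a two-condition query (SQLite flavor; the element paths are illustrative, not from the question):

    -- One row per element/attribute value extracted from each file.
    CREATE TABLE xml_node (
        file_id   INTEGER NOT NULL,   -- which XML file
        path      TEXT    NOT NULL,   -- e.g. '/order/customer/name'
        value     TEXT                -- text content of that node
    );
    CREATE INDEX ix_xml_node_path_value ON xml_node (path, value);

    -- "All files where the customer name is Smith and the total exceeds 100":
    SELECT a.file_id
    FROM xml_node a
    JOIN xml_node b ON b.file_id = a.file_id
    WHERE a.path = '/order/customer/name' AND a.value = 'Smith'
      AND b.path = '/order/total' AND CAST(b.value AS REAL) > 100;

    Since the files share almost the same structure, the set of distinct paths stays small, which keeps the index selective; each self-join adds one query condition.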

    Read the article
