Search Results

Search found 9371 results on 375 pages for 'existing'.


  • Implementing a 2 Legged OAuth Provider

    - by Rob Wilkerson
    I'm trying to find my way around the OAuth spec, its requirements, and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe I'm just looking for something more specialized than most tutorials cover. I have a set of existing APIs, some in Java and some in PHP, that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind:

    1. Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
    2. Given that I have securable APIs in 2 languages, do I need to implement a provider in both, or is there a way to create the provider as a "front controller" that I can funnel all requests through?
    3. When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources in each?

    Thanks for your help.
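
    Whichever provider library you end up with, the core of two-legged OAuth 1.0a is just recomputing the request signature with the consumer secret and comparing it to the one the client sent. A minimal sketch of that check in Python, assuming HMAC-SHA1 (a real provider would also validate timestamp freshness and reject replayed nonces):

        import base64
        import hashlib
        import hmac
        import urllib.parse

        def oauth_signature(method, url, params, consumer_secret):
            # Percent-encode and sort all parameters (except the signature itself),
            # then build the OAuth 1.0a signature base string.
            encoded = sorted(
                (urllib.parse.quote(str(k), safe=""), urllib.parse.quote(str(v), safe=""))
                for k, v in params.items() if k != "oauth_signature"
            )
            param_str = "&".join(f"{k}={v}" for k, v in encoded)
            base_string = "&".join(
                urllib.parse.quote(s, safe="") for s in (method.upper(), url, param_str)
            )
            # Two-legged: there is no token secret, so the key ends with a bare "&".
            key = urllib.parse.quote(consumer_secret, safe="") + "&"
            digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1)
            return base64.b64encode(digest.digest()).decode()

        def verify(method, url, params, consumer_secret):
            return hmac.compare_digest(
                oauth_signature(method, url, params, consumer_secret),
                params.get("oauth_signature", ""),
            )

    Because the check only needs the request itself plus the consumer secret, it can live in a single front controller (or reverse proxy) in front of both the PHP and Java services rather than being reimplemented in each language.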


  • flushing database cache in SWI-Prolog

    - by JPro
    We are using SWI-Prolog to run our test cases. Whenever a test starts, I open a connection to the MySQL database, store the name of the test that is being run, and then close the DB connection. These tests run for about 2 days continuously. After the tests are done, the results get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results to the MySQL database. The code is simple: I use the odbc library and just call odbc_* predicates to connect and update MySQL by issuing direct queries. The actual problem is: if I try to call the predicate from the same Prolog window where the test just completed, I get an error when updating to the DB server, although I do not get any error on the connection itself. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes well. I have a feeling that there is some stale connection reference to the MySQL DB in the Prolog database. Is there any way to clear the database in Prolog so that I can run the same predicate without closing any existing Prolog windows? Any ideas appreciated. Thanks.


  • Create your own custom browser

    - by ShoX
    Hi, I want to build my own browser, or at least modify an existing one far enough that it meets my needs. I want a fast browser (fast at starting and running, not necessarily faster at rendering) without any features I don't use, and simple, productive navigation (like Firefox + Vimperator + Tree Style Tab), only much more integrated with each other and with a different GUI. I was thinking about just looking into the current two top open-source browsers, Chrome and Firefox, and branching my own smaller version out of one of them. By just using WebKit or Gecko I would have to implement all the connection-handling stuff too, and I really am not interested in doing that. So my questions are: Does it make sense to start off with a current browser and strip off certain features and the frontend, replacing them with my own code? Chrome or Firefox? Which one is less complex? I don't care much about plugins and extensions, so aside from those, aren't they pretty much even in features? Thanks for your answers. P.S.: It's a just-for-fun at-home project, so please no "just use the browsers..." answers.


  • Disable Razor's default .cshtml handler in an ASP.NET Web Application

    - by mythz
    Does anyone know how to disable the .cshtml extension completely in an ASP.NET Web Application? In essence I want to hijack the .cshtml extension and provide my own implementation based on a RazorEngine host, but when I try to access page.cshtml directly it appears to be running under an existing WebPages Razor host that I'm trying to disable. Note: it looks like it's executing .cshtml pages under the System.Web.WebPages.Razor context, as the Microsoft.Data database is initialized. I don't even have any MVC or WebPages DLLs referenced, just System.Web.dll and a local copy of System.Web.Razor with RazorEngine.dll. I've created a new ASP.NET .NET 4.0 Web Application and have tried to clear all buildProviders and handlers as seen below:

        <system.web>
          <httpModules>
            <clear/>
          </httpModules>
          <compilation debug="true" targetFramework="4.0">
            <buildProviders>
              <clear/>
            </buildProviders>
          </compilation>
          <httpHandlers>
            <clear/>
            <add path="*" type="MyHandler" verb="*"/>
          </httpHandlers>
        </system.web>
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true">
            <clear/>
          </modules>
          <handlers>
            <clear/>
            <add path="*" name="MyHandler" type="MyHandler" verb="*"
                 preCondition="integratedMode" resourceType="Unspecified"
                 allowPathInfo="true" />
          </handlers>
        </system.webServer>

    Even with this, when I visit any page.cshtml page it still bypasses my wildcard handler and tries to execute the page itself. Basically I want to remove all traces of the .cshtml handlers/buildProviders/preprocessing so I can serve the .cshtml pages myself. Anyone know how I can do this?


  • SQL Server database with clustered GUID PKs - switch clustered index or switch to sequential (comb)

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So, it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time). We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs. Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly smaller (most of them well below 10K). Furthermore, the application is installed at several customer sites already, so we have to take any possible negative effects for existing customers into consideration. Thanks!
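
    For reference, a minimal sketch of the sequential ("comb") GUID idea in Python, assuming SQL Server's uniqueidentifier ordering, which compares the trailing byte group first; the exact timestamp encoding NHibernate's guid.comb generator uses differs in detail:

        import datetime
        import uuid

        def comb_guid() -> uuid.UUID:
            # Random GUID whose last six bytes encode a timestamp, so new values
            # cluster at the end of the index instead of splitting pages randomly.
            raw = bytearray(uuid.uuid4().bytes)
            now = datetime.datetime.utcnow()
            days = (now.date() - datetime.date(1900, 1, 1)).days
            midnight = datetime.datetime.combine(now.date(), datetime.time())
            ms = int((now - midnight).total_seconds() * 1000)
            raw[10:12] = days.to_bytes(2, "big")  # day count since 1900-01-01
            raw[12:16] = ms.to_bytes(4, "big")    # milliseconds since midnight
            return uuid.UUID(bytes=bytes(raw))

    Note the comb only helps inserts going forward; rows already written with random GUIDs stay fragmented until the index is rebuilt.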


  • docs/examples on libxml2 type system?

    - by Wang
    I'm reading through the libxml2 API and examples on xmlsoft.com and having some real difficulty wrapping my head around the type system. For example, the _xmlAttribute struct has an xmlChar* field called name. This obviously refers to the name of the attribute (e.g., "bar" for the struct tied to the element <foo bar='baz' />). The _xmlAttribute struct also has a field called value, which I would expect to be "baz", except the type is xmlNodePtr. Um, what? I guess I could write up some test code and examine the memory in gdb, but there has to be an easier way. Has anyone written up examples of the data structures generated by a libxml2 parser? I'm looking for an English prose text, not generated documentation, that will walk me through the various types and the values they take when parsing some example bit of XML. Bonus points if it presents only the subset of libxml2 needed to read and write a config file---no namespaces or XPath or even modifying an existing document, just parse some XML, access the data, and then write it out again.


  • Enterprise SSO & Identity management / recommendations

    - by Maxim Veksler
    Hello friends, we've discussed SSO before. I would like to revive the conversation with defined requirements, taking into consideration recent developments. In the past week I've been doing market research looking for answers to the following key issues. The project should be:

    Requirements:
    - An SSO solution for web applications.
    - Integrates into existing, already-developed products.
    - Has policy-based password security (length, complexity, duration and so on).
    - The security policy can be managed using a web interface.
    - Customizable user interface (the password prompt and related screens).
    - Highly available (99.9%).
    - Scalable.
    - Runs on Red Hat Linux.

    Nice to have:
    - Contains user groups & roles.
    - Written in Java.
    - Free software (open source) solution.

    None of the solutions that have come up so far is a "killer choice", which leads me to think I will be stitching together several projects (OWASP, AcegiSecurity + X??), hence this discussion. We are an ISV delivering a front-end & backend application suite. The frontend is broken into several modules which should act as autonomous units; from the client's point of view he uses "the application", which leads to this discussion regarding SSO. I would appreciate people sharing their experience & ideas regarding the appropriate solutions. Some solutions that look interesting:

    - CAS
    - Sun OpenSSO Enterprise
    - JBoss Identity IDM
    - JOSSO
    - Tivoli Access Manager for Enterprise Single Sign-On

    Or, more generally speaking, this list. Thank you, Maxim.


  • datagridview apply cellstyle to cells

    - by SchlaWiener
    I used this example to create a DateTime column for a DataGridView in my WinForms app: http://msdn.microsoft.com/en-us/library/7tas5c80.aspx. I can see the new column in the Windows Forms designer and add it to an existing DataGridView. However, I want the display format to change when I change the "DefaultCellStyle" within the designer. The designer-generated code looks like this:

        DataGridViewCellStyle1.Format = "t"
        DataGridViewCellStyle1.NullValue = Nothing
        Me.colDate.DefaultCellStyle = DataGridViewCellStyle1
        Me.colDatum.Name = "colDate"
        Me.colDatum.Resizable = System.Windows.Forms.DataGridViewTriState.[False]

    Which is fine. But since the code of the DataGridViewCalendarCell does this in the constructor:

        Public Sub New()
            Me.Style.Format = "d"
        End Sub

    the format never changes to "t" (time format). I didn't find out how to apply the format from the owning column, so I use this workaround at the moment:

        Public Overrides Function GetInheritedStyle( _
                ByVal inheritedCellStyle As System.Windows.Forms.DataGridViewCellStyle, _
                ByVal rowIndex As Integer, ByVal includeColors As Boolean) _
                As System.Windows.Forms.DataGridViewCellStyle
            If Me.OwningColumn IsNot Nothing Then
                Me.Style.Format = Me.OwningColumn.DefaultCellStyle.Format
            End If
            Return MyBase.GetInheritedStyle( _
                inheritedCellStyle, rowIndex, includeColors)
        End Function

    However, since this is just a hack, I want to know the "how it should be done" way to apply the default cell style from a DataGridViewColumn to its cells. Any suggestions?


  • Need to replace 3rd party WinForm controls, what's the closest WPF equivalent?

    - by Refracted Paladin
    I am tired of Windows Forms...I just am. I am not trying to start a debate on it; I am just bored with it. Unfortunately we have become dependent on 4 controls in DevExpress XtraEditors. I have had nothing but difficulties with them and I want to move on. What I need now is the closest replacement for the 4 controls I am using. Here they are:

    - LookUpEdit - a dropdown that filters the dropdown list as you type.
    - MemoExEdit - a textbox that pops up a bigger editing area when it has focus.
    - CheckedComboBoxEdit - a dropdown of checkboxes.
    - CheckedListBoxControl - a nicely columned list box of checkboxes.

    This is a LOB app that has tons of data entry. In reality, the first two are nice but not essential. The second two are essential, in that I would either need to replicate the functionality or change the way the users interact with that particular data. I am looking for help in replicating these in a WPF environment with existing controls (CodePlex etc.) or in straight XAML. Any code or direction would be greatly appreciated, but mostly I am hoping to avoid any commercial 3rd party WPF controls and would instead like to focus on building them myself (but I need direction) or using CodePlex.


  • Good DB Migrations for CakePHP?

    - by Martin Westin
    Hi, I have been trying a few migration scripts for CakePHP, but I ran into problems with all of them in some form or another. Please advise me on a migration option for Cake that you use live and know works. I'd like the following "features":

    - Supports CakePHP 1.2 (e.g. CakeDC's migrations will only be an option when 1.3 is stable and my app is migrated to the new codebase).
    - Supports (or at least does not halt on) models with a different database config.
    - Supports models in sub-folders of app/models.
    - Supports models in plugins.
    - Supports tables that do not conform to Cake conventions (I have a few special tables that do not have a single primary key field and need to keep them).
    - Plays well with automated deployment via Capistrano and Git.

    I do not need Rails-style versioned files; a Git-versioned schema file that is compared live to the existing schema will do. That is: I like the SchemaShell in Cake, apart from it not being compatible with most of my requirements above. I have looked at and tested:

    - CakePHP Schema Shell: http://book.cakephp.org/view/734/Schema-management-and-migrations
    - CakeDC migrations: http://cakedc.com/downloads/view/cakephp_migrations_plugin
    - YAML migrations: http://github.com/georgious/cakephp-yaml-migrations-and-fixtures
    - joelmoss migrations: http://code.google.com/p/cakephp-migrations


  • Bulk Copy from one server to another

    - by Joseph
    Hi all, I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not be available in the destination table. The solution I'm considering is to use bcp to export the data to a text (or .dat) file and then take that file to the destination, as the two servers are on different networks and not accessible at the same time, then import the data at the destination. There are some conditions I need to satisfy:

    1. I need to export only a list of rows from the table, not the whole table. My client is going to give me the IDs which need to be moved from source to destination. I have around 3000 records in the master table, and the same in the child tables. What I expect is for only about 300 records to be moved.
    2. If a record already exists in the destination, the client will instruct us whether to ignore or overwrite it, case by case. 90% of the time we need to ignore the records without overwriting, but log them to a log file.

    Please help me with the best approach. I thought of using bcp with the query option to filter the data, but while importing, how do I bypass inserting the existing records? And if I need to overwrite, how do I do it? Thanks a lot in advance. ~Joseph
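
    One common shape for the import side: bcp itself cannot skip or merge rows, so the decision has to happen in a loading step. A sketch in Python with pyodbc, assuming illustrative table and column names and a CSV export of the filtered rows:

        import csv
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=destserver;DATABASE=destdb;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        with open("master_export.csv", newline="") as f, open("skipped.log", "w") as log:
            for row in csv.DictReader(f):
                cur.execute("SELECT 1 FROM MasterTable WHERE ID = ?", row["ID"])
                if cur.fetchone():
                    # Default policy: ignore the existing record, but log it.
                    log.write("skipped existing ID %s\n" % row["ID"])
                else:
                    cur.execute(
                        "INSERT INTO MasterTable (ID, Name) VALUES (?, ?)",
                        row["ID"], row["Name"],
                    )
        conn.commit()

    For the overwrite cases, the same loop can issue an UPDATE instead of logging; with larger volumes, bulk-loading into a real staging table and running one set-based INSERT/UPDATE against it is usually faster than row-by-row checks.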


  • WCF 3.5 to 3.0 backwards compatibility with callback services

    - by Miral
    I have a set of existing WCF services hosted in a .NET 3.0 app. They're using the WSHttp bindings and no security. I need to connect to these from a .NET 3.5 client. This seems to be working fine for the one-way services, but I also have some callback services (with CallbackContract and SessionMode = Required, using WSDualHttpBinding); these fail to connect with a timeout somewhere in the ReliableSession code. The service side cannot be changed (it's a historic version issue). Can I modify something on the client side to get this working? (I can connect with a .NET 3.0 client just fine, but I'd rather not be forced to try that path.)

        The open operation did not complete within the allotted timeout of
        00:00:09.9410000. The time allotted to this operation may have been a
        portion of a longer timeout.

        Server stack trace:
        at System.ServiceModel.Channels.ReliableRequestor.ThrowTimeoutException()
        at System.ServiceModel.Channels.ReliableRequestor.Request(TimeSpan timeout)
        at System.ServiceModel.Channels.ClientReliableSession.Open(TimeSpan timeout)
        at System.ServiceModel.Channels.ClientReliableDuplexSessionChannel.OnOpen(TimeSpan timeout)
        at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
        at System.ServiceModel.Channels.ServiceChannel.OnOpen(TimeSpan timeout)
        at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)


  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (by adding another column) which rows came from which element of the list. This is what I've got so far. This is the data.frame that needs to be collapsed/stacked:

        > walk.sample
        [[1]]
             walker        x         y
        1073      3 228.8756 -726.9198
        1086      3 226.7393 -722.5561
        1081      3 219.8005 -728.3990
        1089      3 225.2239 -727.7422
        1032      3 233.1753 -731.5526

        [[2]]
             walker        x         y
        1008      3 205.9104 -775.7488
        1022      3 208.3638 -723.8616
        1072      3 233.8807 -718.0974
        1064      3 217.0028 -689.7917
        1026      3 234.1824 -723.7423

        [[3]]
        [1] 3

        [[4]]
             walker        x         y
        546       2 629.9041  831.0852
        524       2 627.8698  873.3774
        578       2 572.3312  838.7587
        513       2 633.0598  871.7559
        538       2 636.3088  836.6325
        1079      3 206.3683 -729.6257
        1095      3 239.9884 -748.2637
        1005      3 197.2960 -780.4704
        1045      3 245.1900 -694.3566
        1026      3 234.1824 -723.7423

    I have written a function to add a column that denotes which element the rows came from, followed by appending them to an existing data.frame:

        collapseToDataFrame <- function(x) { # collapse list to a dataframe with a twist
          walk.df <- data.frame()
          for (i in 1:length(x)) {
            n.rows <- nrow(x[[i]])
            if (length(x[[i]]) > 1) {
              temp.df <- cbind(x[[i]], rep(i, n.rows))
              names(temp.df) <- c("walker", "x", "y", "session")
              walk.df <- rbind(walk.df, temp.df)
            } else {
              cat("Empty list", "\n")
            }
          }
          return(walk.df)
        }

        > collapseToDataFrame(walk.sample)
        Empty list
        Empty list
             walker         x          y session
        3         1 -604.5055 -123.18759       1
        60        1 -562.0078  -61.24912       1
        84        1 -594.4661  -57.20730       1
        9         1 -604.2893 -110.09168       1
        43        1 -632.2491  -54.52548       1
        1028      3  240.3905 -724.67284       1
        1040      3  232.5545 -681.61225       1
        1073      3  228.8756 -726.91980       1
        1091      3  209.0373 -740.96173       1
        1036      3  248.7123 -694.47380       1

    I'm curious whether this can be done more elegantly, perhaps with do.call() or some other more generic function?


  • Can't ssh to ec2 permission denied (publickey)

    - by Chris Barnes
    I have existing instances running and I can connect to them fine. Even if I start a new instance from one of my saved AMIs I can connect to it fine, but with any new public or community AMI (I've tried 2 official Ubuntu AMIs and 1 Fedora quickstart AMI) I get permission denied (publickey). The permissions are good on my key file. I've also tried creating a new key file. My EC2 firewall rules are good; I've also tried creating a new group. This is the error I'm getting:

        ssh -v -i ec2-keypair [email protected]
        OpenSSH_5.2p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/chris/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com [xxx.xxx.xxx.xxx] port 22.
        debug1: Connection established.
        debug1: identity file ec2-keypair type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'ec2-xxx.xxx.xxx.xxx.compute-1.amazonaws.com' is known and matches the RSA host key.
        debug1: Found key in /Users/chris/.ssh/known_hosts:13
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: ec2-keypair
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).


  • How do I work around the GCC "error: cast from ‘SourceLocation*’ to ‘int’ loses precision" error when compiling cmockery.c?

    - by Daryl Spitzer
    I need to add unit tests using Cmockery to an existing build environment that uses a hand-crafted Makefile, so I need to figure out how to build cmockery.c (without automake). When I run:

        g++ -DHAVE_CONFIG_H -DPIC -I ../cmockery-0.1.2 -I /usr/include/malloc -c ../cmockery-0.1.2/cmockery.c -o obj/cmockery.o

    I get a long list of errors like this:

        ../cmockery-0.1.2/cmockery.c: In function 'void initialize_source_location(SourceLocation*)':
        ../cmockery-0.1.2/cmockery.c:248: error: cast from 'SourceLocation*' to 'int' loses precision

    Here are lines 247-248 of cmockery.c:

        static void initialize_source_location(SourceLocation * const location) {
            assert_true(location);

    assert_true is defined on line 154 of cmockery.h:

        #define assert_true(c) _assert_true((int)(c), #c, __FILE__, __LINE__)

    So the problem (as the error states) is that GCC doesn't like the cast from 'SourceLocation*' to 'int'. I can build Cmockery using ./configure and make (on Linux, and on Mac OS X if I export CFLAGS=-I/usr/include/malloc first) without any errors. I've tried looking at the command line that compiles cmockery.c when I run make (after ./configure):

        gcc -DHAVE_CONFIG_H -I. -I. -I./src -I./src -Isrc/google -I/usr/include/malloc -MT libcmockery_la-cmockery.lo -MD -MP -MF .deps/libcmockery_la-cmockery.Tpo -c src/cmockery.c -fno-common -DPIC -o .libs/libcmockery_la-cmockery.o

    ...but I don't see any options that might work around this error. In "error: cast from 'void*' to 'int' loses precision", I see I could change (int) in cmockery.h to (intptr_t), and I've confirmed that works. But since I can build Cmockery with ./configure and make, there must be a way to get it to build without modifying the source.
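
    Worth noting the difference between the two command lines above: the configure-generated build uses gcc, which compiles cmockery.c as C, where a pointer-to-int cast is merely implementation-defined (at worst a warning), while g++ compiles the same file as C++, where the lossy cast is a hard error on 64-bit targets. So, assuming nothing in the Makefile requires C++ for this file, the likely workaround is simply to compile it with the C compiler:

        gcc -DHAVE_CONFIG_H -DPIC -I ../cmockery-0.1.2 -I /usr/include/malloc \
            -c ../cmockery-0.1.2/cmockery.c -o obj/cmockery.o

    The resulting object file can still be linked into a C++ test binary afterwards, provided the headers are wrapped in extern "C" when included from C++.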


  • Best way for a remote web app to authenticate users in my current web app?

    - by jklp
    A bit of background: I'm working on an existing web application which has a set of users who are able to log in via a traditional login screen with a user name and password, etc. Recently we've managed to score a client (who has their own Intranet site) who wants their users to log into their Intranet, then click a link there which redirects to our application and logs them into it automatically. I've had two suggestions on how to implement this so far:

    1. Create a URL which takes 2 parameters ("username" and "password") and have the Intranet site pass those parameters to us (our connection is via TLS, so it's all encrypted). This would work fine, but it seems a little hacky, and it also means that the logins and passwords have to be the same on both systems (and requires writing some kind of web service which can update the passwords for users, which also seems a bit insecure).
    2. Provide a token to the Intranet, so when the client clicks a link on the Intranet, it sends the token to us along with the user name (and no password), which means they're authenticated. Again, this sounds a bit hacky; isn't that essentially the same as providing everyone with the same password to log in?

    So to summarise, I'm after the following things:

    - A way for users who are already authenticated on the Intranet to log into our system without too much messing around, and without using an external system to authenticate, i.e. LDAP / Kerberos.
    - Something which isn't too specific to this client, and can easily be implemented by other Intranets to log in.
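
    The usual middle ground between those two suggestions is a signed, expiring token rather than a static one: the Intranet signs the username plus a timestamp with a secret shared per client, so the token cannot be replayed later or forged for another user. A sketch in Python, with the field layout and secret purely illustrative:

        import hashlib
        import hmac
        import time

        SHARED_SECRET = b"per-client secret, exchanged out of band"

        def make_token(username: str) -> str:
            # Intranet side: sign "username|timestamp" with the shared secret.
            ts = str(int(time.time()))
            sig = hmac.new(SHARED_SECRET, f"{username}|{ts}".encode(),
                           hashlib.sha256).hexdigest()
            return f"{username}|{ts}|{sig}"

        def verify_token(token: str, max_age_secs: int = 300) -> bool:
            # Our side: recompute the signature and reject stale tokens.
            username, ts, sig = token.split("|")
            expected = hmac.new(SHARED_SECRET, f"{username}|{ts}".encode(),
                                hashlib.sha256).hexdigest()
            return (hmac.compare_digest(sig, expected)
                    and time.time() - int(ts) < max_age_secs)

    Because each client gets its own secret, the same scheme can be handed to other Intranets without being specific to this one.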


  • Excel VBA to Update SQL Table

    - by user307655
    Hi all, I have a small Excel program that is used to upload data to a SQL server, and this has been working well for a while. My problem now is that I would like to offer users a function to update an existing record in SQL. Each row in this table has a unique ID: a column called UID, which is the primary key. This is part of the code currently used to upload new data:

        Set Cn = New ADODB.Connection
        Cn.Open "Driver={SQL Server};Server=" & ServerName & ";Database=" & DatabaseName & _
            ";Uid=" & UserID & ";Pwd=" & Password & ";"
        rs.Open TableName, Cn, adOpenKeyset, adLockOptimistic
        For RowCounter = StartRow To EndRow
            rs.AddNew
            For ColCounter = 1 To NoOfFields
                rs(ColCounter - 1) = shtSheetToWork.Cells(RowCounter, ColCounter)
            Next ColCounter
        Next RowCounter
        rs.UpdateBatch
        ' Tidy up
        rs.Close
        Set rs = Nothing
        Cn.Close
        Set Cn = Nothing

    Is there any way I can modify this code to update a particular UID rather than importing new records? Thanks again for your help.


  • Parallel WCF calls to multiple servers

    - by gregmac
    I have a WCF service (the same one) running on multiple servers, and I'd like to call all instances in parallel from a single client. I'm using ChannelFactory and the interface (contract) to call the service. Each service has a local <endpoint> client defined in the .config file. What I'm trying to do is build some kind of generic framework to avoid code duplication. For example, a synchronous call on a single thread looks something like this:

        Dim remoteName As String = "endpointName1"
        Dim svcProxy As ChannelFactory(Of IMyService) = New ChannelFactory(Of IMyService)(remoteName)
        Try
            svcProxy.Open()
            Dim svc As IMyService = svcProxy.CreateChannel()
            nodeResult = svc.TestRemote("foo")
        Finally
            svcProxy.Close()
        End Try

    The part I'm having difficulty with is how to specify and actually invoke the remote method (e.g. "TestRemote") without having to duplicate the above code, and all the thread-related stuff that invokes it, for each method. In the end, I'd like to be able to write code along the lines of (consider this pseudocode):

        Dim results As Dictionary(Of Node, ExpectedReturnType)
        results = ParallelInvoke(IMyService.SomeMethod, parameter1, parameter2)

    where ParallelInvoke() will take the method as an argument, as well as the parameters (ParamArray or Object()... whatever), go run the request on each remote node, block until they all return an answer or time out, and then return the results in a Dictionary with the node as the key and whatever value it returned as the value. I can then (depending on the method) pick out the single value I need, or aggregate all the values from each server together, etc. I'm pretty sure I can do this using reflection and InvokeMember(), but that requires passing the method as a string (which can lead to errors like calling a non-existing method that can't be caught at compile time), so I'd like to see if there is a cleaner way to do this. Thanks
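
    Independent of how the method reference is expressed in .NET (a lambda or expression tree avoids the magic string that InvokeMember() needs), the fan-out itself is a small generic pattern. A language-neutral sketch of it in Python, where connect() stands in for the hypothetical per-endpoint channel setup:

        from concurrent.futures import ThreadPoolExecutor

        def parallel_invoke(endpoints, method_name, *args, timeout=30):
            # Call the same method on every endpoint; map endpoint -> result.
            def call_one(endpoint):
                proxy = connect(endpoint)  # hypothetical channel factory
                try:
                    return getattr(proxy, method_name)(*args)
                finally:
                    proxy.close()

            with ThreadPoolExecutor(max_workers=len(endpoints)) as pool:
                futures = {ep: pool.submit(call_one, ep) for ep in endpoints}
                return {ep: f.result(timeout=timeout) for ep, f in futures.items()}

    The caller then picks from or aggregates the returned dictionary, which matches the Dictionary(Of Node, ExpectedReturnType) shape sketched above.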


  • Github file size limit changed 6/18/13. Can't push now

    - by slindsey3000
    How does this change, as of June 18, 2013, affect my existing repository with a file that exceeds the limit? I last pushed 2 months ago with a large file. I have a large file that I have removed locally, but I cannot push anything now. I get a "remote error":

        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB

    I added the file to .gitignore after the original push... but it still exists on the remote (origin). Removing it locally should get rid of it at origin (GitHub), right? But it is not letting me push, because there is a file on GitHub that exceeds the limit (https://github.com/blog/1533-new-file-size-limits). These are the commands I issued, plus the error messages:

        git add .
        git commit -m "delete cron_log.log"
        git push origin master

        remote: Error code: 40bef1f6653fd2410fb2ab40242bc879
        remote: warning: Error GH413: Large files detected.
        remote: warning: See http://git.io/iEPt8g for more information.
        remote: error: File cron_log.log is 141.41 MB; this exceeds GitHub's file size limit of 100 MB
        remote: error: File cron_log.log is 126.91 MB; this exceeds GitHub's file size limit of 100 MB
        To https://github.com/slinds(omited_here)/linexxxx(omited_here).git
         ! [remote rejected] master - master (pre-receive hook declined)
        error: failed to push some refs to 'https://github.com/slinds(omited_here)

    I then tried things like:

        git rm cron_log.log
        git rm --cached cron_log.log

    Same error.
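
    The reason deleting the file doesn't help: the push sends every commit made since the last push, and the 100+ MB blob is still inside those earlier commits, so the pre-receive hook keeps rejecting them. The usual fix is to rewrite history so the blob disappears from it entirely; a sketch (this rewrites commits, so anyone sharing the branch must re-clone, and the push needs --force because the history changed):

        git filter-branch --index-filter \
            'git rm --cached --ignore-unmatch cron_log.log' \
            --prune-empty -- --all
        git push --force origin master

    Tools like BFG Repo-Cleaner do the same job faster, and keeping cron_log.log in .gitignore prevents it from sneaking back in.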


  • Cloning and renaming form elements with jQuery

    - by Micor
    I am looking for an effective way to either clone/rename or re-create a set of address fields, to offer the ability to submit multiple addresses on the same page. So, with a form example like this:

        <div id="addresses">
            <div class="address">
                <input type="text" name="address[0].street">
                <input type="text" name="address[0].city">
                <input type="text" name="address[0].zip">
                <input type="text" name="address[0].state">
            </div>
        </div>
        <a href="" id="add_address">Add address form</a>

    From what I can understand, there are two options:

    1. Recreate the form field by field and increment the index, which is kind of verbose:

        var index = $(".address").length;
        $('<input>').attr({ name: 'address[' + index + '].street', type: 'text' }).appendTo(...);
        $('<input>').attr({ name: 'address[' + index + '].city', type: 'text' }).appendTo(...);
        $('<input>').attr({ name: 'address[' + index + '].zip', type: 'text' }).appendTo(...);
        $('<input>').attr({ name: 'address[' + index + '].state', type: 'text' }).appendTo(...);

    2. Clone the existing layer and replace the names in the clone:

        $("div.address").clone().appendTo($("#addresses"));

    Which one do you recommend in terms of efficiency? And if it's #2, can you please suggest how I would go about searching for and replacing all occurrences of [0] with [1] ([n])? Thank you.


  • Amazon EC2 RSA key stopped authenticating - Permission denied (publickey)

    - by shedd
    Authenticating to our Ubuntu EC2 instance worked fine until a little while ago. All of a sudden the key is being rejected. When we create a new instance with the keypair, we're able to connect to the instance perfectly, so it appears to be an issue with the existing instance. Port 22 is open. Any suggestions on what to look at from a configuration standpoint so we can fix this? Any thoughts on how we can get into the box? Here is the SSH debug output. Is there anything obviously amiss? Thanks so much!

        $ ssh -v -i ~/zzz.pem ubuntu@###.###.###.###
        OpenSSH_5.2p1, OpenSSL 0.9.8l 5 Nov 2009
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to ###.###.###.### [###.###.###.###] port 22.
        debug1: Connection established.
        debug1: identity file zzz.pem type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-6ubuntu2
        debug1: match: OpenSSH_5.1p1 Debian-6ubuntu2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '###.###.###.###' is known and matches the RSA host key.
        debug1: Found key in /zzz/.ssh/known_hosts:18
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering public key: /zzz/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Offering public key: zzz.txt
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: zzz.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).



  • Changing font size of legend title in Python pylab rose/polar plot

    - by LaurieW
    I'm trying to change the font size of the title of an existing legend on a rose, or 'polar', plot. Most of the code was written by somebody else, who is away. I've added:

        ax.legend(title=legend_title)
        setp(l.get_title(), fontsize=8)

    to add the title legend_title, a variable the user enters as a string in a different function that uses this code. The second line doesn't return an error, but it doesn't appear to do anything either. The complete code is below; 'Rose' and 'RoseAxes' are modules/functions written by a colleague. Does anyone know of a way to change the legend title font size? I've found some examples for normal plots but can't find any for rose/polar plots.

        from Rose.RoseAxes import RoseAxes
        from pylab import figure, title, setp, close, clf
        from PlotGeneration import color_map_xml

        fig = figure(1)
        rect = [0.02, 0.1, 0.8, 0.8]
        ax = RoseAxes(fig, rect, axisbg='w')
        fig.add_axes(ax)
        if cmap == None:
            (XMLcmap, colors) = color_map_xml.get_cmap('D:/HRW/VET/HrwPyLibs/ColorMapLibrary/paired.xml', 255)
        else:
            XMLcmap = cmap
        bqs = kwargs.pop('CTfigname', None)
        ax.box(Dir, U, bins=rose_binX, units=unit, nsector=nsector, cmap=XMLcmap, lw=0, **kwargs)
        l = ax.legend()
        ax.legend(title=legend_title)
        setp(l.get_texts(), fontsize=8)
        setp(l.get_title(), fontsize=8)

    Thanks for any help
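
    One likely culprit, offered as a guess from the code above: in matplotlib each call to ax.legend() builds a new legend and replaces the current one, so l here still points at the first, title-less legend while the plot shows the second. Creating the legend once and styling that same object should behave:

        # Create the legend a single time, with its title, and keep the handle.
        l = ax.legend(title=legend_title)
        setp(l.get_texts(), fontsize=8)
        setp(l.get_title(), fontsize=8)

    The same effect is available via l.get_title().set_fontsize(8), since get_title() returns an ordinary matplotlib Text object.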


  • Possible to use multiple ServiceReference.ClientConfig files?

    - by Maciek
    I'm building a modular Silverlight application using Prism. In the Shell project, I'm loading 2 modules, each hailing from a separate assembly. For convenience let's call them ModuleA and ModuleB. ModuleA makes calls to WebServiceA, and a ServiceReference.ClientConfig file is present in ModuleA's assembly. In order for this to work, in the Shell project I've added an "existing item" (with its path set to the aforementioned config file in ModuleA's folder) as a link. The Shell launched and ModuleA made a successful call to WebServiceA. Currently I'm working on ModuleB, which also needs to make web service calls, this time to WebServiceB. The service reference has been added, and a ServiceReference.ClientConfig file has appeared in ModuleB's assembly. I've attempted to add that config file as a link to the Shell project as well, but I've failed.

    Q1: Is it possible to use multiple ServiceReference.ClientConfig files like this?
    Q2: Are there any best practices for this case?
    Q3: Is it possible to rename a *.config or *.ClientConfig file? How is it done? How do I tell the application which file to use?


  • Update multiple rows with known keys without inserting new rows if nonexistent keys are found

    - by Kirzilla
    Hello, let's imagine that we have a table, items:

        table: items
        item_id INT PRIMARY AUTO_INCREMENT
        title VARCHAR(255)
        views INT

    Let's imagine that it is filled with something like:

        (1, item-1, 10), (2, item-2, 10), (3, item-3, 15)

    I want to do a multi-row update of views for these items, from data taken from this array ([item_id] => [views]):

        '1' => '50',
        '2' => '60',
        '3' => '70',
        '5' => '10'

    IMPORTANT! Please note that we have item_id=5 in the array, but we don't have item_id=5 in the database. I could use INSERT ... ON DUPLICATE KEY UPDATE, but that way item_id=5 would be inserted into the items table. How do I avoid inserting new keys? I just want item_id=5 to be skipped, because it is not in the table. Of course, before execution I could select the existing keys from the items table, compare them with the keys in the array, delete the nonexistent keys, and then perform INSERT ... ON DUPLICATE KEY UPDATE. But maybe there is some more elegant solution? Thank you.
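
    One elegant option, sketched here in Python (string-built for clarity; parameterize in real code): a single UPDATE with a CASE expression. Since UPDATE never inserts, keys that don't exist in the table are simply not matched by the WHERE clause:

        views = {1: 50, 2: 60, 3: 70, 5: 10}

        cases = " ".join("WHEN %d THEN %d" % (i, v) for i, v in views.items())
        ids = ", ".join(str(i) for i in views)
        sql = ("UPDATE items SET views = CASE item_id %s END "
               "WHERE item_id IN (%s)" % (cases, ids))
        # UPDATE items SET views = CASE item_id WHEN 1 THEN 50 WHEN 2 THEN 60
        #   WHEN 3 THEN 70 WHEN 5 THEN 10 END WHERE item_id IN (1, 2, 3, 5)

    Row 5 matches nothing, so it is skipped with no insert and no pre-select round trip.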

