Search Results


  • arp problems with transparent bridge on linux

    - by Mink
    I've been trying to secure the virtual machines on my ESX server by putting them behind a transparent bridge with 2 interfaces, one in front, one at the back. My intention is to put all the firewall rules in one place (instead of on each virtual server). As the bridge I've been using a blank new virtual machine based on Arch Linux (but I suspect it doesn't matter which brand of Linux it is).

    What I have is 2 virtual switches (thus two virtual networks, VN_front and VN_back), each with 2 types of ports (switched/separated, or promiscuous, where the machine can see all packets). On my bridge machine, I've set up 2 virtual NICs, one on VN_front, one on VN_back, both in promiscuous mode. I've created a bridge br0 with both NICs in it:

    ```
    brctl addbr br0
    brctl stp br0 off
    brctl addif br0 front_if
    brctl addif br0 back_if
    ```

    Then brought them up:

    ```
    ifconfig front_if 0.0.0.0 promisc
    ifconfig back_if 0.0.0.0 promisc
    ifconfig br0 0.0.0.0
    ```

    (I use promiscuous mode because I'm not sure I can do without it, thinking that maybe the packets don't reach the NICs otherwise.)

    Then I took one of my virtual servers sitting on VN_front and plugged it into VN_back instead (that's the nifty use case I'm thinking about: being able to move my servers around just by changing the VN they are plugged into, without changing anything in the configuration). Then I looked at the MACs "seen" by my addressless bridge using `brctl showmacs br0`, and it did show my server from both sides. I get something that looks like this:

    ```
    port no  mac addr           is local?  ageing timer
    2        00:0c:29:e1:54:75  no         9.27
    1        00:0c:29:fd:86:0c  no         9.27
    2        00:50:56:90:05:86  no         73.38
    1        00:50:56:90:05:88  no         0.10
    2        00:50:56:90:05:8b  yes        0.00   << FRONT VN
    1        00:50:56:90:05:8c  yes        0.00   << BACK VN
    2        00:50:56:90:19:18  no         13.55
    2        00:50:56:90:3c:cf  no         13.57
    ```

    The thing is that the servers plugged in front/back are not shown on the correct port. I suspect some horrible thing happening in the ARP world... If I ping from a front virtual server to a back virtual server, I can only see the back machine if that back machine pings something in the front. As soon as I stop the ping from the back machine, the ping from the front machine stops getting through. I've noticed that if the back machine pings, then its port on the bridge is the correct one.

    I've tried to play with the arp_* switches of /proc/sys, but with no clear effect on the end result:

    - /proc/sys/net/ipv4/ip_forward doesn't seem to be of any use when using a bridge (it seems it's all taken care of by brctl).
    - /proc/sys/net/ipv4/conf/*/arp_* don't seem to change much either (tried arp_announce at 2 or 8, as suggested elsewhere, and arp_ignore at 0 or 1).

    All the examples I've seen have a different subnet on either side, like 10.0.1.0/24 and 10.0.2.0/24. In my case I want 10.0.1.0/24 on both sides (just like a transparent switch, except it's a hidden firewall). Turning stp on/off doesn't seem to have any impact on my issue. It's as if the ARP packets were getting through the bridge, corrupting the other side with false data. I've tried to use -arp on each interface (br0, front, back); it breaks the thing altogether. I suspect it has something to do with both sides being on the same subnet.

    I've thought about putting all my machines behind the firewall, so as to have the whole subnet at the back, but I'm stuck with my provider's gateway standing at the front with part of my subnet (in fact 3 appliances routing the whole subnet), so I'll always have IPs from the same subnet on both sides, whatever I do (I'm using fixed front IPs on my delegated subnet). I'm at a loss... Thanks for your help. (Has anyone tried something like this from within ESXi?)

    (It's not just a stunt. The idea is to have something like fail2ban running on some servers, sending their banned IPs to the bridge/fw so that it too could ban them, saving all the other servers from that same attacker in one go, and allowing for some honeypot that would trigger the fw from any kind of suitable response, and things of that sort. I am aware I could use something like snort, but it addresses a completely different kind of problem, in a completely different way.)
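
    One way to narrow down what the bridge is doing with ARP is to watch both member interfaces at once. A minimal debugging sketch, assuming the interface names from the post:

    ```
    # Watch ARP on each side of the bridge; -e prints the ethernet header
    # so you can see which source MAC each request/reply actually carries,
    # and -n suppresses name lookups.
    tcpdump -n -e -i front_if arp &
    tcpdump -n -e -i back_if arp
    ```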

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers, as it turns out, don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen in one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question, "how do you enforce the non-negative balance rule", then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized; they will not scribble on the same document at the same time. In our case, in choosing a ledger approach, we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry that got written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care: Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. The other write concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance; the others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance: the record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes: eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are 2 options to deal with this.

    First, similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then, if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to, so as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand; they were trained by many online businesses already that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. That works by simply adding some logic such that the query against the ledger never shows customers a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in 2 ways: replication can catch up to all instances by then, and validation rules can run and determine if this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time as, or 1ms after, the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The 2 transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and 2 independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": huh? What are those for? "No transactions": you mean no cross-collection and no cross-document transactions? Granted, but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
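
    A minimal sketch of the ledger mechanics in the mongo shell; the collection name, field names, and validation query are illustrative, not from the original post:

    ```javascript
    // One ledger entry per transfer, written once and never updated.
    db.ledger.insert({
        ts: new Date(),
        fromAccountId: "A",
        toAccountId: "B",
        amount: 100,
        validated: false
    });

    // Validation: sum everything leaving and entering A's account,
    // then check that the resulting balance is non-negative.
    var debits = db.ledger.aggregate([
        { $match: { fromAccountId: "A" } },
        { $group: { _id: null, total: { $sum: "$amount" } } }
    ]).toArray();

    var credits = db.ledger.aggregate([
        { $match: { toAccountId: "A" } },
        { $group: { _id: null, total: { $sum: "$amount" } } }
    ]).toArray();

    var balance = (credits.length ? credits[0].total : 0) -
                  (debits.length ? debits[0].total : 0);

    // If balance < 0, insert a compensating entry (B -> A) rather than
    // deleting or editing the original one.
    ```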

    Read the article

  • When someone deletes a shared data source in SSRS

    - by Rob Farley
    SQL Server Reporting Services plays nicely. You can have things in the catalogue that get shared. You can have Reports that have Links, Datasets that can be used across different reports, and Data Sources that can be used in a variety of ways too. So if you find that someone has deleted a shared data source, you potentially have a bit of a horror story going on. And this works for this month's T-SQL Tuesday theme, hosted by Nick Haslam, who wants to hear about horror stories. I don't write about LobsterPot client horror stories, so I'm writing about a situation that a fellow MVP friend asked me about recently instead.

    The best thing to do is to grab a recent backup of the ReportServer database, restore it somewhere, and figure out what's changed. But of course, this isn't always possible. And it's much nicer to help someone with this kind of thing, rather than to be trying to fix it yourself when you've just deleted the wrong data source. Unfortunately, Reporting Services lets you delete data sources without trying to scream that the data source is shared across over 400 reports in over 100 folders, as was the case for my friend's colleague. So, suddenly there's a big problem: lots of reports are failing, and the time to turn it around is small. You probably know which data source has been deleted, but getting the shared data source back isn't the hard part (that's just a connection string really). The nasty bit is all the re-mapping, to get those 400 reports working again.

    I know from exploring this kind of stuff in the past that the ReportServer database (using its default name) has a table called dbo.Catalog to represent the catalogue, and that Reports are stored here. However, the information about what data sources these deployed reports are configured to use is stored in a different table, dbo.DataSource. You could be forgiven for thinking that shared data sources would live in this table, but they don't – they're catalogue items just like the reports. Let's have a look at the structure of these two tables (although if you're reading this because you have a disaster, feel free to skim past). Frustratingly, there doesn't seem to be a Books Online page for this information, sorry about that. I'm also not going to look at all the columns, just ones that I find interesting enough to mention, and that are related to the problem at hand. These fields are consistent all the way through to SQL Server 2012 – there doesn't seem to have been any change here for quite a while.

    dbo.Catalog

    The Primary Key is ItemID. It's a uniqueidentifier. I'm not going to comment any more on that. A minor nice point about using GUIDs in unfamiliar databases is that you can more easily figure out what's what. But foreign keys are for that too...

    Path, Name and ParentID tell you where in the folder structure the item lives. Path isn't actually required – you could've done recursive queries to get there. But as that would be quite painful, I'm more than happy for the Path column to be there. Path contains the Name as well, incidentally.

    Type tells you what kind of item it is. Some examples are 1 for a folder and 2 for a report. 4 is linked reports, 5 is a data source, 6 is a report model. I forget the others for now (but feel free to put a comment giving the full list if you know it).

    Content is an image field, remembering that image doesn't necessarily store images – these days we'd rather use varbinary(max), but even in SQL Server 2012, this field is still image. It stores the actual item definition in binary form, whether it's actually an image, a report, whatever.

    LinkSourceID is used for Linked Reports, and has a self-referencing foreign key (allowing NULL, of course) back to ItemID.

    Parameter is an ntext field containing XML for the parameters of the report. Not sure why this couldn't be a separate table, but I guess that's just the way it goes. This field gets changed when the default parameters get changed in Report Manager.

    There is nothing in dbo.Catalog that describes the actual data sources that the report uses. The default data sources would be part of the Content field, as they are defined in the RDL, but when you deploy reports, you typically choose NOT to replace the data sources. Anyway, they're not in this table. Maybe it was already considered a bit wide to throw in another ntext field, I'm not sure. They're in dbo.DataSource instead.

    dbo.DataSource

    The Primary Key is DSID. Yes, it's a uniqueidentifier...

    ItemID is a foreign key reference back to dbo.Catalog.

    Fields such as ConnectionString, Prompt, UserName and Password do what they say on the tin, storing information about how to connect to the particular source in question.

    Link is a uniqueidentifier, which refers back to dbo.Catalog. This is used when a data source within a report refers back to a shared data source, rather than embedding the connection information itself. You'd think this should be enforced by foreign key, but it's not. It does allow NULLs though.

    Flags is an int, and I'll come back to this.

    When a Data Source gets deleted out of dbo.Catalog, you might assume that it would be disallowed if there are references to it from dbo.DataSource. Well, you'd be wrong. And not because of the lack of a foreign key either. Deleting anything from the catalogue is done by calling a stored procedure called dbo.DeleteObject. You can look at the definition in there – it feels very much like the kind of Delete stored procedure that many people write, the kind of thing that means they don't need to worry about allowing cascading deletes with foreign keys – because the stored procedure does the lot. Except that it doesn't quite do that. If it deleted everything on a cascading delete, we'd've lost all the data sources as configured in dbo.DataSource, and that would be bad. This is fine if the ItemID from dbo.DataSource hooks in – if the report is being deleted. But if a shared data source is being deleted, you don't want to lose the existence of the data source from the report. So it sets it to NULL, and it marks it as invalid. We see this code in that stored procedure:

    ```sql
    UPDATE [DataSource]
       SET [Flags] = [Flags] & 0x7FFFFFFD, -- broken link
           [Link] = NULL
    FROM [Catalog] AS C
       INNER JOIN [DataSource] AS DS ON C.[ItemID] = DS.[Link]
    WHERE (C.Path = @Path OR C.Path LIKE @Prefix ESCAPE '*')
    ```

    Unfortunately there's no semi-colon on the end (but I'd rather they fix the ntext and image types first), and don't get me started about using the table name in the UPDATE clause (it should use the alias DS). But there is a nice comment about what's going on with the Flags field.

    What I'd LIKE it to do would be to set the connection information to a report-embedded copy of the connection information that's in the shared data source, the one that's about to be deleted. I understand that this would cause someone to lose the benefit of having the data sources configured in a central point, but I'd say that's probably still slightly better than LOSING THE INFORMATION COMPLETELY. Sorry, rant over. I should log a Connect item – I'll put that on my todo list.

    So it sets the Link field to NULL, and marks the Flags to tell you they're broken. So this is your clue to fixing it. A bitwise AND with 0x7FFFFFFD is basically stripping out the '2' bit from a number. So numbers like 2, 3, 6, 7, 10, 11, etc, whose binary representation ends in either 11 or 10, get turned into 0, 1, 4, 5, 8, 9, etc. We can test for it using a WHERE clause that matches the SET clause we've just used. I'd also recommend checking for Link being NULL and also having no ConnectionString. And join back to dbo.Catalog to get the path (including the name) of the broken reports – in case you get a surprise from a different data source having been broken in the past.

    ```sql
    SELECT c.Path, ds.Name
    FROM dbo.[DataSource] AS ds
    JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
    WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
      AND ds.[Link] IS NULL
      AND ds.[ConnectionString] IS NULL;
    ```

    When I just ran this on my own machine, having deleted a data source to check my code, I noticed a Report Model in the list as well – so if you had thought it was just going to be reports that were broken, you'd be forgetting something.

    So to fix those reports, get your new data source created in the catalogue, and then find its ItemID by querying Catalog, using Path and Name to find it. And then use this value to fix them up. To fix the Flags field, just add 2. I prefer to use bitwise OR, which should do the same. Use the OUTPUT clause to get a copy of the DSIDs of the ones you're changing, just in case you need to revert something later after testing (doing it all in a transaction won't help, because you'll just lock out the table, stopping you from testing anything).

    ```sql
    UPDATE ds
    SET [Flags] = [Flags] | 2,
        [Link] = '3AE31CBA-BDB4-4FD1-94F4-580B7FAB939D' /* Insert your own GUID */
    OUTPUT deleted.Name, deleted.DSID, deleted.ItemID, deleted.Flags
    FROM dbo.[DataSource] AS ds
    JOIN dbo.[Catalog] AS c ON c.ItemID = ds.ItemID
    WHERE ds.[Flags] = ds.[Flags] & 0x7FFFFFFD
      AND ds.[Link] IS NULL
      AND ds.[ConnectionString] IS NULL;
    ```

    But please be careful. Your mileage may vary. And there's no reason why 400-odd broken reports needs to be quite the nightmare that it could be. Really, it should be less than five minutes.

    @rob_farley
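
    As a small aid for the "find its ItemID" step above, a lookup sketch; the path and name are hypothetical placeholders, not from the original post:

    ```sql
    -- Find the ItemID of the re-created shared data source.
    -- '/Data Sources/MySharedSource' is a made-up path; use your own.
    SELECT ItemID
    FROM dbo.[Catalog]
    WHERE [Path] = '/Data Sources/MySharedSource'
      AND [Name] = 'MySharedSource'
      AND [Type] = 5;  -- 5 = data source, per the Type list above
    ```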

    Read the article

  • Parallax backgrounds in OpenGL ES on the iPhone

    - by Scott
    I've got basically a 2d game on the iPhone and I'm trying to set up multiple backgrounds that scroll at different speeds (known as parallax backgrounds). So my thought was to just stick the backgrounds BEHIND the foreground using different z-coordinate planes, and just make them bigger than the foreground (in size) to accommodate, so that the whole thing can be scrolled (just at a different speed). And (as far as I know) I basically implemented that.

    The only problem is that it seems to entirely ignore whatever z-value I give it, or rather it just zeroes all of them. I see the background (I've only tested ONE background so far, to keep it simple... so for now I just have a foreground and I want one background scrolling at a different speed), but it scrolls 1:1 with my foreground, so it obviously doesn't look right, and most of it is cut off (cause it's bigger). And I've tried various z-values for the background and various near/far clipping planes... it's always the same. I'm probably just doing one simple thing wrong, but I can't figure it out. I'm wondering if it has to do with me using only 2 coordinates in glVertexPointer for the foreground? (Of course for the background I AM passing in 3.)

    I'll post some code. This is some initial setup:

    ```c
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 10.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnableClientState(GL_VERTEX_ARRAY);
    //glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // transparency
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    ```

    A little bit about my foreground's float array: it's interleaved. For my foreground it goes vertex x, vertex y, texture x, texture y, repeat. This all works just fine. This is my FOREGROUND rendering:

    ```c
    glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), texes);
    glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (GLvoid*)texes + 2*sizeof(GLfloat));
    glDrawArrays(GL_TRIANGLES, 0, indexCount / 4);
    ```

    BACKGROUND rendering: same drill here, except this time it goes vertex x, vertex y, vertex z, texture x, texture y, repeat. Note the z value this time. I did make sure the data in this array was correct while debugging (getting the right z values). And again, it shows up... it's just not going far back in the distance like it should.

    ```c
    glVertexPointer(3, GL_FLOAT, 5*sizeof(GLfloat), b1Texes);
    glTexCoordPointer(2, GL_FLOAT, 5*sizeof(GLfloat), (GLvoid*)b1Texes + 3*sizeof(GLfloat));
    glDrawArrays(GL_TRIANGLES, 0, b1IndexCount / 5);
    ```

    And to move my camera, I just do a simple glTranslatef(x, y, 0.0f);

    I'm not understanding what I'm doing wrong, cause this seems like the most basic 3D function imaginable: things further away are smaller and don't move as fast when the camera moves. Not the case for me. Seems like it should be pretty basic and not even really be affected by my projection and all that (though I've even tried doing glFrustum just for fun, no success). Please help, I feel like it's just one dumb thing. I will post more code if necessary.
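
    One thing worth noting when reading the setup above: glOrthof defines an orthographic projection, and under an orthographic projection depth never changes apparent size or scroll speed, which matches the symptom described. A hedged sketch of a perspective setup instead (the frustum values are illustrative, not tuned for this game):

    ```c
    /* Perspective projection: z-distance now affects both size and the
       apparent scroll speed when the modelview is translated.
       Note: near and far must both be positive for glFrustumf. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-1.0f, 1.0f, -1.5f, 1.5f, 1.0f, 20.0f);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    /* Geometry must then sit at z <= -near (e.g. foreground at z = -1,
       background at z = -5) to be inside the viewing volume at all. */
    ```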

    Read the article

  • Paperclip: "missing" image when uses has_one

    - by EricR
    I'm working on a website that allows people who run bed and breakfast businesses to post their accommodations. I would like to require that they include a "profile image" of the accommodation when they post it, but I also want to give them the option to add more images later (this will be developed afterwards). I thought the best thing to do would be to use the Paperclip gem and have an Accommodation and a Photo in my application, the latter belonging to the first as an association. A new Photo record is created when they create an Accommodation. It has both id and accommodation_id attributes. However, the image is never uploaded and none of the Paperclip attributes get set (image_file_name: nil, image_content_type: nil, image_file_size: nil), so I get Paperclip's "missing" photo. Any ideas on this one? It's been keeping me stuck for a few days now.

    Accommodation

    models/accommodation.rb:

    ```ruby
    class Accommodation < ActiveRecord::Base
      validates_presence_of :title, :description, :photo, :thing, :location
      attr_accessible :title, :description, :thing, :borough, :location, :spaces, :price
      has_one :photo
    end
    ```

    controllers/accommodations_controller.rb:

    ```ruby
    class AccommodationsController < ApplicationController
      before_filter :login_required, :only => {:new, :edit}

      uses_tiny_mce(:options => {
        :theme => 'advanced',
        :theme_advanced_toolbar_location => 'top',
        :theme_advanced_toolbar_align => 'left',
        :theme_advanced_buttons1 => 'bold,italic,underline,bullist,numlist,separator,undo,redo',
        :theme_advanced_buttons2 => '',
        :theme_advanced_buttons3 => ''
      })

      def index
        @accommodations = Accommodation.all
      end

      def show
        @accommodation = Accommodation.find(params[:id])
      end

      def new
        @accommodation = Accommodation.new
      end

      def create
        @accommodation = Accommodation.new(params[:accommodation])
        @accommodation.photo = Photo.new(params[:photo])
        @accommodation.user_id = current_user.id
        if @accommodation.save
          flash[:notice] = "Successfully created your accommodation."
          render :action => 'show'
        else
          render :action => 'new'
        end
      end

      def edit
        @accommodation = Accommodation.find(params[:id])
      end

      def update
        @accommodation = Accommodation.find(params[:id])
        if @accommodation.update_attributes(params[:accommodation])
          flash[:notice] = "Successfully updated accommodation."
          render :action => 'show'
        else
          render :action => 'edit'
        end
      end

      def destroy
        @accommodation = Accommodation.find(params[:id])
        @accommodation.destroy
        flash[:notice] = "Successfully destroyed accommodation."
        redirect_to :inkeep
      end

      private

      def check_owner
      end
    end
    ```

    views/accommodations/_form.html.erb:

    ```erb
    <%= form_for @accommodation, :html => {:multipart => true} do |f| %>
      <%= f.error_messages %>
      <p>
        Title<br />
        <%= f.text_field :title, :size => 60 %>
      </p>
      <p>
        Description<br />
        <%= f.text_area :description, :rows => 17, :cols => 75, :class => "mceEditor" %>
      </p>
      <p>
        Photo<br />
        <%= f.file_field :photo %>
      </p>
      [... snip ...]
      <p><%= f.submit %></p>
    <% end %>
    ```

    Photo

    The controller and views are still the same as when Rails generated them.

    models/photo.rb:

    ```ruby
    class Photo < ActiveRecord::Base
      attr_accessible :image_file_name, :image_content_type, :image_file_size
      belongs_to :accommodation
      has_attached_file :image,
        :styles => { :thumb => "100x100#", :small => "150x150>" }
    end
    ```
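
    For what it's worth, one direction to explore (a sketch, not a verified fix): the attachment on Photo is named :image, but the form posts the file under :photo on the accommodation, so the upload may never reach has_attached_file. Nested attributes would route it there:

    ```ruby
    # Hypothetical rewiring: the accommodation accepts nested photo
    # attributes, and the file field is named after Paperclip's :image.
    class Accommodation < ActiveRecord::Base
      has_one :photo
      accepts_nested_attributes_for :photo
    end

    # In _form.html.erb, nest the upload under the photo association:
    #   <%= f.fields_for :photo do |p| %>
    #     <%= p.file_field :image %>
    #   <% end %>
    # The create action could then build the photo from
    # params[:accommodation][:photo_attributes] automatically on save.
    ```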

    Read the article

  • Separation of presentation and business logic in PHP

    - by Markus Ossi
    I am programming my first real PHP website and am wondering how to make my code more readable to myself. The reference book I am using is PHP and MySQL Web Development, 4th ed. The aforementioned book gives three approaches to separating logic and content:

    1. include files
    2. function or class API
    3. template system

    I haven't chosen any of these yet, as wrapping my brain around these concepts is taking some time. However, my code has become some hybrid of the first two, as I am just copy-pasting away here and modifying as I go.

    On the presentation side, all of my pages have these common elements: header, top navigation, sidebar navigation, content, right sidebar and footer. The function-based examples in the book suggest that I could have display functions that handle all the presentation. So, my page code would be like this:

    ```php
    display_header();
    display_navigation();
    display_content();
    display_footer();
    ```

    However, I don't like this, because the examples in the book have these print statements with HTML and PHP mixed up like this:

    ```php
    echo "<tr bgcolor=\"".$color."\"><td><a href=\"".$url."\">" ...
    ```

    I would rather have HTML with some PHP in the middle, not the other way round. I am thinking of making my pages so that at the beginning of each page, I fetch all the data from the database and put it in arrays. I would also get the data for variables. If there are any errors in any of these processes, I would put them into error strings. Then, in the HTML, I would loop through these arrays using foreach and display the content. In some cases, there would be some variables to show. If an error variable is set, I would display it at the proper position.

    (As a side note: the thing I do not understand is that in most example code, if some database query or whatnot gives an error, there is always:

    ```php
    else echo 'Error';
    ```

    This baffles me, because when the example code gives an error, it is sometimes echoed out even before the HTML has started...)

    For people who have used ASP.NET: I have gotten somewhat used to the code-behind files and lblError, and I am trying to do something similar here. The thing I haven't figured out is how I could do this "do logic first, then presentation" thing so that I would not have to replicate, for example, the navigation logic and navigation presentation in all of the pages. Should I use some include files, or could I use functions here, but a little bit differently? Are there any good articles where these "styles" of separating presentation and logic are explained a little more thoroughly? The book I have only has one paragraph about this stuff. What I am thinking is that I am talking about some concepts or ways of doing PHP programming here, but I just don't know the terms for them yet. I know this isn't a straightforward question; I just need some help in organizing my thoughts.
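
    A minimal sketch of the "logic first, presentation second" page flow described above (the table and field names are invented for illustration):

    ```php
    <?php
    // --- Logic section: fetch everything before any HTML is emitted. ---
    $products = array();
    $error = null;

    $result = mysqli_query($db, 'SELECT name, price FROM products');
    if ($result === false) {
        $error = 'Could not load products.';   // collected, not echoed here
    } else {
        while ($row = mysqli_fetch_assoc($result)) {
            $products[] = $row;
        }
    }
    ?>
    <!-- --- Presentation section: HTML with small PHP islands. --- -->
    <?php if ($error !== null): ?>
      <p class="error"><?php echo htmlspecialchars($error); ?></p>
    <?php else: ?>
      <ul>
      <?php foreach ($products as $product): ?>
        <li><?php echo htmlspecialchars($product['name']); ?></li>
      <?php endforeach; ?>
      </ul>
    <?php endif; ?>
    ```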

    Read the article

  • fatal error C1084: Cannot read type library file: 'Smegui.tlb': Error loading type library/DLL.

    - by Steiny
    Hi, I am trying to build an old version of an application which consists of VC++ projects that were written in Visual Studio 2003. My OS is Windows 7 Enterprise (64-bit). When I try to build the solution I get the following errors:

    ```
    error C4772: #import referenced a type from a missing type library; '__missing_type__' used as a placeholder
    fatal error C1084: Cannot read type library file: 'Smegui.tlb': Error loading type library/DLL.
    ```

    They both complain about the following import statement:

    ```cpp
    #import "Smegui.tlb" no_implementation
    ```

    This is not a case of the file path being incorrect, as renaming the Smegui.tlb file causes the compiler to throw another error saying it cannot find the library. Smegui is from another application that this one depends on. I thought perhaps I was missing a DLL, but there is no such thing as Smegui.dll. All I know about .tlb files is that they are type libraries, and that you can create them from an assembly using tlbexp.exe or regasm.exe (the latter also registers the assembly with COM).

    There is also an Apache Ant build script which uses a custom task to invoke devenv.com to build the projects. This is the same script that the build server originally used to build the application. It gives me the same errors when I try to run it. The strangest thing about this is that I knew it ought to work, seeing as it is all freshly checked out from Subversion. I have tried many different combinations of admin vs user elevation, VS build vs Ant build, cleaning, release... I have gotten it to build successfully about 5 times, but the build seems to be non-deterministic.

    If anyone can shed some light on how this tlb stuff even works or what this error might mean, I would greatly appreciate it.

    Cheers, Steiny

    Read the article

  • WPF: Draw a grid on a Canvas?

    - by stefan.at.wpf
    Hello community, I'm currently trying to add gridlines to a canvas. I need an exact free space between them, where I'd like to place something for hit detection per cell, maybe simply a transparent border or some such thing. While I thought this would be easy, I'm facing problems like antialiasing, and the fact that lines in WPF aren't very exact or "calculation"-friendly to draw with. E.g. if I draw a line at x=20 with a thickness of 10, the line's width goes from x=15 to x=25 (maybe not exactly, just something like that), so it takes the given position as its middle point; if it drew from 20 to 30, it would be easier in my case. Besides that making things more complex: how does WPF handle, say, a thickness of 5? Draw thickness 3 left of the given point and the remaining 2 right of it? Or maybe the opposite way?

    Well, I just wanted to show you which problems I have, though this all maybe just seems simple to do. Just wondering if anyone has ever done this before. Currently, a Border without content and just 2 sides set to a thickness greater than 0 seems to work best as a line in my tests: it seems clear where it is drawn, and it somehow doesn't seem to cause any antialiasing problems. Just wondering if there's a more intuitive / better way of doing this? I don't want to lay a Canvas over a Grid; I think that makes some things more complex in the end (by the way: how would I place a Canvas on top of a Grid?).

    Thanks for any hint!
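
    For what it's worth, a common workaround sketch for the centered-stroke/antialiasing issue (`canvas`, `cellSize`, `columns` and `rows` are placeholders): at the default 96 DPI, a 1px stroke centered on an integer coordinate straddles two device pixels, so putting it on a half-pixel coordinate keeps it crisp.

    ```csharp
    // Sketch: crisp vertical gridlines every cellSize units on a Canvas.
    // The +0.5 offset centers the 1px stroke on a single device pixel.
    double cellSize = 40;
    int columns = 8, rows = 8;

    for (int i = 0; i <= columns; i++)
    {
        var line = new System.Windows.Shapes.Line
        {
            X1 = i * cellSize + 0.5,
            Y1 = 0,
            X2 = i * cellSize + 0.5,
            Y2 = rows * cellSize,
            Stroke = System.Windows.Media.Brushes.Gray,
            StrokeThickness = 1,
            SnapsToDevicePixels = true
        };
        canvas.Children.Add(line);  // horizontal lines work the same way
    }
    ```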

    Read the article

  • Problems with repositories on CentOS 3.9

    - by rodnower
    Hello, I have CentOS 3.9 for i386. When I try to install something with yum, i.e. `yum install firefox`, `yum install firefox*`, `yum list firefox` and so on, I get:

    ```
    yum info firefox
    Gathering header information file(s) from server(s)
    Server: CentOS-3 - Addons
    Server: CentOS-3 - Base
    Server: CentOS-3 - Extras
    Server: CentOS-3 - Updates
    Server: Jason's Utter Ramblings Repo
    Finding updated packages
    Downloading needed headers
    Looking in Available Packages:
    Looking in Installed Packages:
    ```

    Some time ago I had CentOS 5, and I had a similar problem (except that it was all the other packages, not just firefox, that would not install), and I spent a lot of time finding different repositories and so on. Now I have CentOS 3, and there is nothing I can install with yum. This is the yum.conf content:

    ```ini
    [main]
    cachedir=/var/cache/yum
    debuglevel=2
    logfile=/var/log/yum.log
    pkgpolicy=newest
    distroverpkg=redhat-release
    installonlypkgs=kernel kernel-smp kernel-hugemem kernel-enterprise kernel-debug kernel-unsupported kernel-smp-unsupported kernel-hugemem-unsupported
    tolerant=1
    exactarch=1

    [utterramblings]
    name=Jason's Utter Ramblings Repo
    baseurl=http://www.jasonlitka.com/media/EL4/i386/

    [base]
    name=CentOS-$releasever - Base
    baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/

    #released updates
    [update]
    name=CentOS-$releasever - Updates
    baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/

    #packages used/produced in the build but not released
    [addons]
    name=CentOS-$releasever - Addons
    baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/

    #additional packages that may be useful
    [extras]
    name=CentOS-$releasever - Extras
    baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/

    #[centosplus]
    #name=CentOS-$releasever - Plus
    #baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/

    #[testing]
    #name=CentOS-$releasever - Testing
    #baseurl=http://mirror.centos.org/centos/$releasever/testing/$basearch/

    #[fasttrack]
    #name=CentOS-$releasever - Fasttrack
    #baseurl=http://mirror.centos.org/centos/$releasever/fasttrack/$basearch/
    ```

    The file is too long, so I lightly edited it. So my question is: is there some "normal" repository that has all the basic things like firefox, which I can add to this file so that everything works fine? Thank you very much in advance.

    Read the article

  • How to bulk insert a CSV file into SQLite C#

    - by Lirik
    I have seen similar questions (1, 2), but none of them discuss how to insert CSV files into SQLite. About the only thing I could think of doing is to use a CSVDataAdapter and fill the SQLiteDataSet, then use the SQLiteDataSet to update the tables in the database. The only DataAdapter for CSV files I found is not actually available:

    ```csharp
    CSVDataAdapter CSVda = new CSVDataAdapter(@"c:\MyFile.csv");
    CSVda.HasHeaderRow = true;
    DataSet ds = new DataSet(); // <-- Use an SQLiteDataSet instead
    CSVda.Fill(ds);
    ```

    To write to a CSV file:

    ```csharp
    CSVDataAdapter CSVda = new CSVDataAdapter(@"c:\MyFile.csv");
    bool InclHeader = true;
    CSVda.Update(MyDataSet, "MyTable", InclHeader);
    ```

    I found the above code at http://devintelligence.com/2005/02/dataadapter-for-csv-files/. The CSVDataAdapter was supposed to come with OpenNetCF's SDF, but it doesn't seem to be available anymore. Does anybody know where I can get a CSVDataAdapter? Perhaps somebody knows the much simpler thing: how to do bulk inserts of CSV files into SQLite... your help would be greatly appreciated!
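
    In the meantime, a hand-rolled sketch without any CSVDataAdapter, using the System.Data.SQLite provider (the table name, column names and the two-column CSV layout are assumptions, and the naive Split(',') won't handle quoted fields):

    ```csharp
    using System.Data.SQLite;
    using System.IO;

    static void BulkInsertCsv(string csvPath, string connectionString)
    {
        using (var conn = new SQLiteConnection(connectionString))
        {
            conn.Open();
            // One transaction around all inserts is what makes this fast;
            // per-row autocommit in SQLite is orders of magnitude slower.
            using (var tx = conn.BeginTransaction())
            using (var cmd = new SQLiteCommand(
                "INSERT INTO MyTable (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
            {
                cmd.Parameters.Add(new SQLiteParameter("@c1"));
                cmd.Parameters.Add(new SQLiteParameter("@c2"));

                using (var reader = new StreamReader(csvPath))
                {
                    string line = reader.ReadLine(); // skip the header row
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split(',');
                        cmd.Parameters["@c1"].Value = fields[0];
                        cmd.Parameters["@c2"].Value = fields[1];
                        cmd.ExecuteNonQuery(); // reuses the prepared command
                    }
                }
                tx.Commit();
            }
        }
    }
    ```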

    Read the article

  • SSRS 2008 and SSAS 2008 transport error

    - by dan english
    I am testing an upgrade to SSAS 2008 and verifying that existing reports work properly. I am able to get some SSRS reports that use SSAS as a data source to run without any issues. They are simple and only have a single dataset. The reports that I am unable to get to work correctly against SSAS 2008 have multiple datasets and a filter with a date range set up as a parameter. As soon as I set that filter up as a parameter and deploy them, the report returns a "The connection either timed out or was lost. Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host." message.

    The funny thing is that the report works fine when I run it locally in BIDS, and it works fine once deployed if I point it to a SSAS 2005 server. Once I point it to the SSAS 2008 server, it fails. I can get other reports to work fine, but not the ones with this type of filter setup. I can see the start and end date parameter MDX statements run in the trace, but that is it. After those run, we receive the transport connection message. Another funny thing is that in the production environment the reports are working fine, but that has SSRS 2005 and SSAS 2008.

    Does this make sense? What could be causing this? I have tried setting the single transaction level on the data source too, but that does not seem to make a difference.

    Read the article

  • LINQ to Entities exceptions (ElementAtOrDefault and CompareObjectEqual)

    - by OffApps Cory
    I am working on a shipping platform which will eventually automate shipping through several major carriers. I have a ShipmentsView UserControl which displays a list of Shipments (returned by Entity Framework), and when a user clicks on a shipment item, it spawns a ShipmentEditView and passes the ShipmentID (RecordKey) to that view. I initially wrestled with trying to get the context from the parent (ShipmentsView) and finally gave up, resolving to get to it later. I wanted to do this to keep a single instance of the context.

    Anyhow, I now create a new instance of the context in my ShipmentEditViewModel, and query against it for the Shipment record. I know I could just pass the record, but I wanted to use the Ocean framework that Karl Shifflett wrote and don't want to muck about writing new transition methods. So anyhow, I query, and when stepping through I can see that it returns a record; but as soon as execution reaches the point where the query result is assigned to the e.Result property, it throws one of the following exceptions, depending on the query I used.

    LINQ to Entities:

    ```vbnet
    Dim RecordID As Decimal = CDec(e.Argument)
    Dim myResult = From ship In _Context.Shipment _
                   Where ship.ShipID = e.Argument _
                   Select ship

    Select Case myResult.Count
        Case 0
            e.Result = New Shipment
        Case 1
            e.Result = myResult(0)
        Case Else
            e.Result = Nothing
    End Select
    ```

    "LINQ to Entities does not recognize the method 'System.Object.CompareObjectEqual(System.Object, System.Object, Boolean)' method, and this method cannot be translated into a store expression."

    LINQ to Entities via method calls:

    ```vbnet
    Dim RecordID As Decimal = CDec(e.Argument)
    Dim myResult = _Context.Shipment.Where(Function(s) s.ShipID = RecordID)

    Select Case myResult.Count
        Case 0
            e.Result = New Shipment
        Case 1
            e.Result = myResult(0)
        Case Else
            e.Result = Nothing
    End Select
    ```

    "LINQ to Entities does not recognize the method 'SnazzyShippingDAL.Shipment ElementAtOrDefault[Shipment](System.Linq.IQueryable`1[SnazzyShippingDAL.Shipment], Int32)' method, and this method cannot be translated into a store expression."

    I have been trying to get this thing to display a record for like three days. I am seriously thinking about going back and re-engineering it without the MVVM pattern (which I realize I am only starting to learn and understand), if only to make the &$^%ed thing work. Any help will be much appreciated.

    Cory
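
    A possible direction (a sketch, not a verified fix): in the first query, the Where compares ShipID to e.Argument, which is typed Object, so the VB compiler emits CompareObjectEqual; and in both versions, myResult(0) indexes into an IQueryable, which compiles to ElementAtOrDefault. Neither call can be translated to SQL. Comparing against the already-cast RecordID and materializing with FirstOrDefault keeps everything translatable:

    ```vbnet
    ' Sketch: compare against the typed Decimal and materialize with
    ' FirstOrDefault instead of indexing into the IQueryable.
    Dim RecordID As Decimal = CDec(e.Argument)
    Dim shipment = _Context.Shipment _
        .Where(Function(s) s.ShipID = RecordID) _
        .FirstOrDefault()

    If shipment Is Nothing Then
        e.Result = New Shipment()
    Else
        e.Result = shipment
    End If
    ```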

    Read the article

  • Using LINQ to SQL in ASP.NET MVC2 project

    - by mazhar
    Well, I am new to this ORM stuff. We have to create a large project. I read about LINQ to SQL. Will it be appropriate to use it in a project of high risk? I found no problem with it personally, but the thing is that there will be no going back once we start, so I need some feedback from the ORM gurus here at MSDN. Will Entity Framework be better? (I am in doubt about LINQ to SQL because I have read and heard negative feedback here and there.) I will be using MVC2 as the framework, so please give feedback about LINQ to SQL in this regard.

    Q2) Also, I am a fan of stored procedures, as they are precomputed and speed things up, and I have never worked without them. I know that LINQ to SQL supports stored procedures, but will it be feasible to give up stored procedures in exchange for the beautiful data access layer generated with little effort, given that we are also in need of rapid development?

    Q3) If some changes to some fields are required in the database, how will the changes be accommodated in the LINQ to SQL data access layer?

    Read the article

  • Creating a Custom EventAggregator Class

    - by Phil
    One thing I noticed about Microsoft's Composite Application Guidance is that the EventAggregator class is a little inflexible. I say that because getting a particular event from the EventAggregator involves identifying the event by its type, like so:

    ```csharp
    _eventAggregator.GetEvent<MyEventType>();
    ```

    But what if you want different events of the same type? For example, if a developer wants to add a new event to his application of type CompositePresentationEvent<int>, he would have to create a new class that derives from CompositePresentationEvent<int> in a shared library somewhere, just to keep it separate from any other events of the same type. In a large application, that's a lot of little two-line classes like the following:

    ```csharp
    public class StuffHappenedEvent : CompositePresentationEvent<int> {}
    public class OtherStuffHappenedEvent : CompositePresentationEvent<int> {}
    ```

    I don't really like that approach. It almost feels dirty to me, partially because I don't want a million two-line event classes sitting around in my infrastructure dll. What if I designed my own simple event aggregator that identified events by an event ID rather than the event type? For example, I could have an enum such as the following:

    ```csharp
    public enum EventId
    {
        StuffHappened,
        OtherStuffHappened,
        YetMoreStuffHappened
    }
    ```

    And my new event aggregator class could use the EventId enum (or a more general object) as a key to identify events in the following way:

    ```csharp
    _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.StuffHappened);
    _eventAggregator.GetEvent<CompositePresentationEvent<int>>(EventId.OtherStuffHappened);
    ```

    Is this good design for the long run? One thing I noticed is that this reduces type safety. In a large application, is this really as important a concern as I think it is? Do you think there could be a better alternative design?
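
    To make the trade-off concrete, a minimal sketch of such an id-keyed aggregator (illustrative, not Prism's API), showing exactly where the type safety is lost:

    ```csharp
    using System.Collections.Generic;

    public class KeyedEventAggregator
    {
        private readonly Dictionary<EventId, object> _events =
            new Dictionary<EventId, object>();

        public TEvent GetEvent<TEvent>(EventId id) where TEvent : class, new()
        {
            object evt;
            if (!_events.TryGetValue(id, out evt))
            {
                evt = new TEvent();
                _events[id] = evt;
            }
            // This cast is the weak spot: if a caller asks for the same id
            // with a different TEvent, it fails only at runtime.
            return (TEvent)evt;
        }
    }
    ```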

    Read the article

  • Invoking WCF functions using Reflection

    - by Jankhana
    I am pretty new to WCF applications. I have a WCF application that is using NetTcpBinding. I wanted to invoke the functions in the WCF service using the System.Reflection MethodBase Invoke method; that is, I wanted to dynamically call a function by passing the function name as a string. Reflection works great for a web service, a Windows application, or any dll or class, so there must be a way to do this for WCF also, but I am not able to find it.

    I am getting the assembly name and then its type, everything fine, but as we cannot create an instance of the interface class, I tried to open the WCF connection using the binding and tried to pass that object, but it's throwing the exception: "Object does not match target type." I have opened the connection and passed the object, and the type is of the interface only. I don't know whether I'm trying the wrong thing, or trying the right thing in the wrong way. Any idea how I should accomplish this?

    The NetTcpBinding properties are all given properly while opening the connection. And one more thing: I am hosting the WCF service in a Windows service, using NetTcpBinding.
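
    "Object does not match target type" is usually thrown when a MethodInfo obtained from one type is invoked on an instance of an unrelated type. A sketch of one arrangement that does line up (the contract name, address and operation below are placeholders): take the MethodInfo from the contract interface and invoke it on a channel, since the channel proxy implements that interface.

    ```csharp
    using System.Reflection;
    using System.ServiceModel;

    // Hypothetical contract; in practice this is your service interface.
    [ServiceContract]
    public interface IMyService
    {
        [OperationContract]
        string SomeOperation(string input);
    }

    static class DynamicCaller
    {
        public static object InvokeByName(string methodName, object[] args)
        {
            var factory = new ChannelFactory<IMyService>(
                new NetTcpBinding(),
                new EndpointAddress("net.tcp://localhost:8000/MyService"));

            IMyService channel = factory.CreateChannel();
            try
            {
                // Reflect on the interface type, invoke on the channel:
                // the proxy implements IMyService, so the target matches.
                MethodInfo method = typeof(IMyService).GetMethod(methodName);
                return method.Invoke(channel, args);
            }
            finally
            {
                ((IClientChannel)channel).Close();
                factory.Close();
            }
        }
    }
    ```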

    Read the article

  • Very strange iSeries Provider behavior

    - by AJ
    We've been given a "stored procedure" from our RPG folks that returns six data tables. Attempting to call it from .NET (C#, 3.5) using the iSeries Provider for .NET (tried using both V5R4 and V6R1), we are seeing different results based on how we call the stored proc. Here's the way we'd prefer to do it:

    ```csharp
    using (var dbConnection = new iDB2Connection("connectionString"))
    {
        dbConnection.Open();
        using (var cmd = dbConnection.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "StoredProcName";

            var parm = new iDB2Parameter("InParm1", iDB2DbType.Varchar);
            parm.Value = thing;
            cmd.Parameters.Add(parm);

            var ds = new DataSet();
            var da = new iDB2DataAdapter(cmd);
            da.Fill(ds);
        }
    }
    ```

    Doing it this way, we get FIVE tables back in the result set. However, if we do this:

    ```csharp
    cmd.CommandType = CommandType.Text;
    cmd.CommandText = "CALL StoredProcName('" + thing + "')";
    ```

    we get back the expected SIX tables. I realize that there aren't many of us sorry .NET-to-DB2 folks out here, but I'm hoping someone has seen this before. TIA.

    Read the article

  • Android app resets on orientation change, best way to handle?

    - by Anthony Westover
    So I am making a basic chess app to play around with various elements of Android programming, and so far I am learning a lot, but this time I am lost. When the orientation of the emulator changes, the activity gets reset. Based on my research, the same thing will happen any time the application is paused/interrupted: a keyboard change, a phone call, hitting the home key, etc. Obviously it is not viable to have a chess game constantly reset, so once again I find myself needing to learn how to fix this problem.

    My research brings up a few main options: overriding the onPause method in my Activity, listening for orientation and keyboard changes in my manifest (via android:configChanges), using Parcelables, or Serialization. I have looked up a lot of sample code using Parcelables, but to be honest it is too confusing. Maybe coming back tomorrow with fresh eyes will be beneficial, but right now the more I look at Parcelable the less sense it makes.

    My application uses a Board object, which has 64 Cell objects (in an 8x8 2D array), and each Cell has a Piece object, either an actual piece or null if the space is empty. Assuming that I use either Parcelable or Serialization, I am assuming that I would have to parcel or serialize each class: Board, Cell, and Piece.

    First and foremost, is Parcelable or Serialization even the right thing to be looking at for this problem? If so, is either Parcelable or Serializable preferred for this? And am I correct in assuming that each of the three objects would have to be parceled/serialized? Finally, does anybody have a link to a simple-to-understand Parcelable tutorial? Anything to help me understand, and stop further headaches down the road when my application expands even further. Any help would be appreciated.
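
    For a rough sense of the Serializable route, a sketch under the assumption that Board, Cell and Piece all implement java.io.Serializable (with nested objects, every class in the graph must):

    ```java
    // In the Activity: stash the board before the restart, restore after.
    private Board board;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main); // hypothetical layout name
        if (savedInstanceState != null) {
            // Restored after a rotation/interruption.
            board = (Board) savedInstanceState.getSerializable("board");
        } else {
            board = new Board(); // fresh game
        }
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putSerializable("board", board);
    }
    ```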

    Read the article

  • Get Classic ASP variable from posted JSON

    - by Will
    I'm trying to post JSON via AJAX to a Classic ASP page, which retrieves the value, checks a database and returns JSON to the original page.

    - I can post JSON via AJAX.
    - I can return JSON from ASP.
    - I can't retrieve the posted JSON into an ASP variable.

    For a POST you use Request.Form, for a GET you use Request.QueryString... what do I use for JSON? I have JSON libraries, but they only show creating a string in the ASP script and then parsing that. I need to parse JSON that is passed in as an external variable.

    JavaScript:

    ```javascript
    var thing = $(this).val();
    $.ajax({
        type: "POST",
        url: '/ajax/check_username.asp',
        data: "{'userName':'" + thing + "'}",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        cache: false,
        async: false,
        success: function() {
            alert('success');
        }
    });
    ```

    ASP file (check_username.asp):

    ```vbscript
    Response.ContentType = "application/json"

    sEmail = Request.Form() ' -- THE PROBLEM

    Set oRS = Server.CreateObject("ADODB.Recordset")
    SQL = "SELECT SYSUserID FROM dbo.t_SYS_User WHERE Username='" & sEmail & "'"
    oRS.Open SQL, oConn

    If Not oRS.EOF Then
        sStatus = (New JSON).toJSON("username", True, False)
    Else
        sStatus = (New JSON).toJSON("username", False, False)
    End If

    Response.Write sStatus
    ```
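
    Request.Form only parses application/x-www-form-urlencoded bodies, so a JSON body never shows up there. A sketch (untested) of reading the raw request body in classic ASP via ADODB.Stream, which the JSON library can then parse:

    ```vbscript
    ' Read the raw POST body as UTF-8 text.
    Dim body, stream
    If Request.TotalBytes > 0 Then
        Set stream = Server.CreateObject("ADODB.Stream")
        stream.Type = 1                                  ' adTypeBinary
        stream.Open
        stream.Write Request.BinaryRead(Request.TotalBytes)
        stream.Position = 0
        stream.Type = 2                                  ' adTypeText
        stream.Charset = "utf-8"
        body = stream.ReadText                           ' e.g. {"userName":"bob"}
        stream.Close
    End If
    ' Hand "body" to the JSON library to pull out userName.
    ```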

    Read the article

  • Store comparison in variable (or execute comparison when it's given as an string)

    - by BorrajaX
    Hello everyone. I'd like to know if the super-powerful Python allows storing a comparison in a variable, or, if not, whether it's possible to call/execute a comparison when it is given as a string ("==" or "!=").

    I want to give the users of my program the chance to supply a comparison as a string. For instance, let's say I have a list of "products" and the user wants to select the products whose manufacturer is "foo". He would input something like:

    ```python
    Product.manufacturer == "foo"
    ```

    and if the user wants the products whose manufacturer is not "bar", he would input:

    ```python
    Product.manufacturer != "bar"
    ```

    If the user inputs that line as a string, I create a tree with a structure like:

    ```
            !=
           /  \
    manufacturer  "bar"
    ```

    I'd like to allow that comparison to run properly, but I don't know how to make it happen when != is a string. The "manufacturer" field is a property, so I can properly get it from the Product class and store it (as a property) in the leaf, and "bar" is just a string. What I don't know is whether I can do something similar with the comparator != itself: store it as a "callable" (kind of) thing, the way I store the property.

    I have tried eval, and it may work, but the comparisons are going to be used to query a MySQL database (using sqlalchemy) and I'm a bit concerned about the security of that... Any idea will be deeply appreciated. Thank you!

    PS: The idea of all this is to be able to generate a sqlalchemy query, so if the user inputs the string:

    ```python
    Product.manufacturer != "foo" || Product.manufacturer != "bar"
    ```

    my tree thing can generate the following:

    ```python
    sqlalchemy.or_(Product.manufacturer != "foo", Product.manufacturer != "bar")
    ```

    Since sqlalchemy.or_ is callable, I can also store it in one of the leaves... I only see a problem with the "!=".
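
    One eval-free direction worth sketching: the standard operator module exposes comparisons as plain functions, and because SQLAlchemy columns overload == and !=, operator.eq / operator.ne applied to a column produce the corresponding SQLAlchemy expression objects (the helper names below are illustrative):

    ```python
    import operator
    from sqlalchemy import or_

    # Map the user-supplied comparison strings to real functions.
    COMPARATORS = {
        "==": operator.eq,
        "!=": operator.ne,
    }

    def make_criterion(column, op_string, value):
        """Build a SQLAlchemy criterion from a parsed tree node."""
        compare = COMPARATORS[op_string]   # raises KeyError on unknown ops
        return compare(column, value)      # e.g. Product.manufacturer != 'bar'

    # For the tree parsed from the user's input, something like:
    # criterion = or_(
    #     make_criterion(Product.manufacturer, "!=", "foo"),
    #     make_criterion(Product.manufacturer, "!=", "bar"),
    # )
    ```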

    Read the article

  • code review: Is it subjective or objective(quantifiable) ?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize it, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA), with VS2008 for development.

    What are the things you look for when doing a code review? These are the things I came up with:

    - Enforce FxCop rules (we are a Microsoft shop)
    - Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
    - Adhere to naming conventions
    - The code should cover edge cases and boundary conditions
    - Handle exceptions correctly (do not swallow exceptions)
    - Check if the functionality is duplicated elsewhere
    - A method body should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    - Do not pass/return nulls in methods
    - Avoid dead code
    - Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so that it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)?

    The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max to do a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, etc., but you get the general idea: the reviewer should not spend days reviewing someone's code).

    What other objective/quantifiable systems do you follow to identify good/bad code written by developers?

    Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.

    Read the article

  • Weird certificate error when trying to generate web service client from secure site

    - by Vlad
    Dear Stack Overflow, I get a weird error when trying to use the Axis 1.4 Wsdl2Java tool to generate client code for a web service installed on a secure IIS site. When I run the tool I get the following SSL exception:

        javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching XXXXXXX.net found
            at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:174)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1591)
            at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:187)
            at com.sun.net.ssl.internal.ssl.Handshaker.fatalSE(Handshaker.java:181)
            at com.sun.net.ssl.internal.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:975)
            at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:123)
            at com.sun.net.ssl.internal.ssl.Handshaker.processLoop(Handshaker.java:516)
            at com.sun.net.ssl.internal.ssl.Handshaker.process_record(Handshaker.java:454)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:884)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1096)
            at com.sun.net.ssl.internal.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1123)

    The weird thing is that this error only occurs when I run Wsdl2Java, and only for this particular server. I have another web server with an identical set-up, and everything works fine there. I triple-checked all the keystores, and it looks like all the CA certificates are loaded correctly. I tried using another server with an identical setup and was able to generate the client proxy code without any problems. Odder still, if I use the code generated from the other server against the problem server, everything works fine. It is only Wsdl2Java that is giving me a problem.
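    That exception usually means exactly what it says: the hostname in the WSDL URL does not match any CN/SAN name in the certificate the server presents, so the problem server's certificate most likely carries a different name than the working server's. A quick diagnostic sketch to dump the certificate the server actually sends (the hostname is a placeholder, matching the masked name in the trace):

        import socket, ssl

        # Dump the certificate presented by the server so its CN/SAN entries
        # can be compared against the hostname used in the WSDL URL.
        host, port = "XXXXXXX.net", 443
        ctx = ssl.create_default_context()
        ctx.check_hostname = False   # we want the cert even when the name mismatches
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        print(ssl.DER_cert_to_PEM_cert(der))
        # Pipe the PEM through `openssl x509 -noout -text` to read the
        # subject and subjectAltName fields.

    If the name printed there differs from the one in the WSDL URL, regenerating the certificate with the right name, or pointing Wsdl2Java at the name the certificate does carry, should clear the handshake error.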

    Read the article

  • ExtJS to call a RESTful web service

    - by VSC
    Hello, I am trying to make a RESTful web service call using ExtJS. Below is the code I am using:

        Ext.Ajax.request({
            url: incomingURL,
            method: 'POST',
            params: { param1: p1, param2: p2 },
            success: function (responseObject) {
                var obj = Ext.decode(responseObject.responseText);
                alert(obj);
            },
            failure: function (responseObject) {
                var obj = Ext.decode(responseObject.responseText);
                alert(obj);
            }
        });

    However, it does not work: the request is sent using the OPTIONS method instead of POST. I also tried to do the same thing using the code below, but the result is the same:

        var conn = new Ext.data.Connection();
        conn.request({
            url: incomingURL,
            method: 'POST',
            params: { param1: p1, param2: p2 },
            success: function (responseObject) {
                Ext.Msg.alert('Status', 'success');
            },
            failure: function (responseObject) {
                Ext.Msg.alert('Status', 'Failure');
            }
        });

    But when I tried the same thing with a basic Ajax call (using the browser objects directly, i.e. XMLHttpRequest or ActiveXObject("Microsoft.XMLHTTP")), it works fine and I get the response I expect. Can anyone please help me? I am not able to understand what I am doing wrong with the ExtJS Ajax call.
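    A request going out as OPTIONS instead of POST is the classic signature of a CORS preflight. Assuming the service lives on a different origin than the page (an assumption here, not stated in the question), the browser asks the server for permission first; ExtJS adds an X-Requested-With header by default, which forces that preflight, while a bare XMLHttpRequest with no custom headers can be sent as a "simple" POST directly, which would explain the difference observed above. Below is a minimal sketch of a server answering the preflight, in Python's standard library; the port and allowed header list are hypothetical:

        from http.server import BaseHTTPRequestHandler, HTTPServer

        # Sketch of a service that answers the CORS preflight: the browser
        # will only send the real POST after OPTIONS comes back with the
        # Access-Control-Allow-* headers.
        class Handler(BaseHTTPRequestHandler):
            def _cors(self):
                self.send_header("Access-Control-Allow-Origin", "*")
                self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
                self.send_header("Access-Control-Allow-Headers",
                                 "Content-Type, X-Requested-With")

            def do_OPTIONS(self):
                # The preflight itself: no body, just the permission headers.
                self.send_response(204)
                self._cors()
                self.end_headers()

            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)  # consume the request body
                self.send_response(200)
                self._cors()
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(b'{"received": true}')

        HTTPServer(("", 8000), Handler).serve_forever()

    If the service cannot be changed, the usual alternative is to serve the page and the service from the same origin (for example via a reverse proxy), so no preflight is triggered at all.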

    Read the article

  • LGPL and Dual Licensing Ajax Library

    - by Thomas Hansen
    Hi guys, I'm the previous founder of Gaiaware and Gaia Ajax Widgets, and when I worked there we had this rhetoric (which I have confirmed with some very smart FOSS people is correct) that when using a GPL Ajax library you're basically "distributing" the JavaScript, which in turn makes the GPL's viral clause kick in and forces people to purchase a proprietary license if they're going to build closed-source stuff... So now I'm in the LGPL world here with Ra-Ajax, which is an LGPL-licensed library, and I've got no intention of creating a GPL-licensed library, since I believe strongly that the LGPL is the "enabler" that lets the Open Web prevail. But something interesting has happened which I think might still give me a "business model" here, namely the linking clause of the LGPL, which I think goes something like this (pseudo): "If you link to an LGPL-licensed thing, you get no restrictions on your own derived works"... So we started creating something we're calling Ajax Starter-Kits, which are effectively "project kickstarters": you can download a finished project/solution that lets you start out with pre-done boilerplate code for problems such as Ajax DataGrids, Ajax Calendar Applications, Ajax TreeView Applications, etc. And the funny thing is that our users would NOT "link" to these; they would effectively BE our users' applications... So to wrap up my question: would this force users of our LGPL-licensed Ajax Starter-Kits to LGPL-license their own work? Basically, if it does, we have a business model (and I get very happy); if not, I'd just have to hope people would still like to pay us those $29 for our Starter-Kits to support the project... ;) Help rewarded with extreme gratitude...

    Read the article

  • Why Does My Website Redirect Me to My localhost?

    - by Noah Brainey
    Alright, my website has some issues, and I'm not sure what's causing them. Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links... it redirects you to your localhost in the address bar. I have no idea why it does this. I'm hosting this website on my own server, which is this computer, using XAMPP, if that information helps. Anyway, any help would be greatly appreciated! I'm also using DynDNS for my nameservers. I've already asked this question on the Super User and Web Apps Q&A sites; neither could help, and they said to come here. Another thing to note is that this website runs on one script, not multiple scripts (upload.cgi). However, there are three files that aren't dynamic and aren't part of upload.cgi: about.html, browse.html, and tos.html. Also note that my homepage, which is upload.cgi, can only be accessed by manually typing in online-file-sharing.net/cgi-bin/upload.cgi (which isn't its real location, but it seems to recognize it this way... and still redirects me to my localhost). My .htaccess file contains:

        DirectoryIndex upload.cgi

    The path-related code at the top of upload.cgi:

        my $version = "4.14";
        $ENV{PATH} = '/bin:/usr/bin';
        delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'};
        ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint.
        #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi';
        use lib './perlmodules';
        #use Time::HiRes 'gettimeofday';
        #my $hires_start = gettimeofday();
        my (%PREF,%TEXT) = ();

    The script I'm using is FileChucker. I hope this information is enough to find an answer... if not, please let me know and I'll post as much information as you need!
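    A first diagnostic step is to look at where the redirect actually points without letting the browser follow it. A small sketch (the domain is the one from the question and may no longer resolve; substitute the path of whichever link misbehaves):

        import http.client

        # Request the page without following redirects and print the Location
        # header, to see exactly what host the redirect points at.
        conn = http.client.HTTPConnection("online-file-sharing.net", timeout=10)
        conn.request("GET", "/tos.html")
        resp = conn.getresponse()
        print(resp.status, resp.reason)
        print("Location:", resp.getheader("Location"))
        conn.close()

    If the Location header shows localhost, the links are most likely being generated from a configured base URL (or the server's own idea of its hostname) rather than the public domain, which would point at the FileChucker configuration or the Apache ServerName setting rather than DNS.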

    Read the article

< Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >