Search Results

Search found 61944 results on 2478 pages for 'text database'.

  • Useful Vim features

    - by Craig H
    Vim is my editor of choice, and I feel I am above average in my use of it. I do recognize, though, that Vim's feature list is huge. With this in mind, I was wondering which features you Vim users out there use on a regular basis.

    Read the article

  • Is there any C/C++ editor in Linux that shows errors while typing?

    - by MetallicPriest
    The Visual C++ editor has a great feature: it underlines errors with a red line as you type the code. So, for example, if you are using a variable that is not declared, it will underline it with a red squiggly line. In this way, the programmer can resolve a lot of errors while typing and doesn't have to wait for a compile to notice them. Now my question is: is there any editor for Linux that has this great feature?

    Read the article

  • Which delimiter to use when splitting a String?

    - by London
    I need to split each line of this string to get the third word (the film name), but as you can see the delimiter is whitespace of varying width: in some places it is a single space, in others a wide run of blanks, such as in front of the numbers at the end. I tried using string split with a (" ") regex, and also with \t, but I get an out-of-bounds error. 400115305 Lionel_Atwill The_Song_of_Songs_(1933_film) 7587 400115309 Brian_Aherne A_Night_to_Remember_(1943_film) 7952 Has anyone had the same problem?
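
    A minimal sketch of one way around this (assuming Java, since split with a regex and an out-of-bounds error suggest it): split on runs of whitespace of any length instead of a single space or tab, so the width of the gap between columns no longer matters.

        public class SplitExample {
            public static void main(String[] args) {
                String line = "400115305   Lionel_Atwill   The_Song_of_Songs_(1933_film)   7587";
                // "\\s+" matches one or more whitespace characters (spaces or tabs),
                // so wide and narrow gaps between columns are treated the same way.
                String[] parts = line.trim().split("\\s+");
                String filmName = parts[2]; // third column: the film name
                System.out.println(filmName);
            }
        }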

    Read the article

  • Fast and efficient way to read a space separated file of numbers into an array?

    - by John_Sheares
    I need a fast and efficient method to read a space-separated file with numbers into an array. The files are formatted this way: 4 6 1 2 3 4 5 6 2 5 4 3 21111 101 3 5 6234 1 2 3 4 2 33434 4 5 6 The first row is the dimension of the array [rows columns]. The lines following contain the array data. The data may also be formatted without any newlines like this: 4 6 1 2 3 4 5 6 2 5 4 3 21111 101 3 5 6234 1 2 3 4 2 33434 4 5 6 I can read the first line and initialize an array with the row and column values. Then I need to fill the array with the data values. My first idea was to read the file line by line and use the split function. But the second format listed gives me pause, because the entire array data would be loaded into memory all at once. Some of these files are in the hundreds of MBs. The second method would be to read the file in chunks and then parse them piece by piece. Maybe somebody else has a better way of doing this?
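
    A minimal sketch of the chunked approach (assuming C#/.NET, since a split function is mentioned; the helper below is illustrative): stream the file and parse one whitespace-delimited number at a time, so neither format ever has to be held in memory as a whole.

        using System.IO;

        class ArrayLoader
        {
            static int[,] Load(string path)
            {
                using (var reader = new StreamReader(path))
                {
                    // The first two numbers are the dimensions [rows columns].
                    int rows = ReadInt(reader);
                    int cols = ReadInt(reader);
                    var data = new int[rows, cols];

                    for (int r = 0; r < rows; r++)
                        for (int c = 0; c < cols; c++)
                            data[r, c] = ReadInt(reader);

                    return data;
                }
            }

            // Reads the next whitespace-delimited, non-negative integer from the
            // stream; spaces and newlines are treated identically, as in the sample data.
            static int ReadInt(TextReader reader)
            {
                int ch;
                while ((ch = reader.Read()) != -1 && char.IsWhiteSpace((char)ch)) { }
                if (ch == -1) throw new EndOfStreamException();

                int value = 0;
                do
                {
                    value = value * 10 + (ch - '0');
                } while ((ch = reader.Read()) != -1 && !char.IsWhiteSpace((char)ch));
                return value;
            }
        }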

    Read the article

  • How can I map a Windows group login to the dbo schema in a database?

    - by Christian Hayter
    I have a database for which I want to restrict access to 3 named individuals. I thought I could do the following: Create a local Windows group on the database server and add the named individuals to it. Create a Windows login in SQL Server mapped to the local Windows group. Map the login to the "dbo" schema in the database, so that the users can access all objects without having to qualify them with the schema name. When I try to do step 3, I get the following error: Msg 15353, Level 16, State 1, Line 1 An entity of type database cannot be owned by a role, a group, an approle, or by principals mapped to certificates or asymmetric keys. I have tried to do this via the IDE, the sp_changedbowner sproc, and the ALTER AUTHORIZATION command, and I get the same error each time. After searching MSDN and Google, I find that this restriction is by design. Great, that's useful. Can anyone tell me: Why this restriction exists? It seems very arbitrary. More importantly, can I accomplish my requirement some other way? Other info that might be pertinent: The server is fully up to date with service packs and hotfixes. All objects in the database are owned by the "dbo" schema, and it's not feasible to change that. The database is running in compatibility level 80, and it's not feasible to change that to 90 yet. I am free to make any other changes (within reason, depending on what they are).
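
    A hedged sketch of the workaround most often suggested for this (group and database names below are placeholders): instead of making the Windows group own the dbo schema, map the group to a database user and put that user in the db_owner database role. Unqualified object names fall back to the dbo schema during name resolution, so the three users can still reference dbo objects without a schema prefix.

        USE [MyDatabase];
        GO
        -- Map the Windows group (already a server login) to a database user.
        CREATE USER [MYDOMAIN\MyDbaGroup] FOR LOGIN [MYDOMAIN\MyDbaGroup];
        -- Grant broad rights through a role instead of dbo ownership.
        EXEC sp_addrolemember N'db_owner', N'MYDOMAIN\MyDbaGroup';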

    Read the article

  • SQL Server 2008: bringing a database online tries to open a file from a drive that doesn't exist

    - by Nai
    This is the error I am facing: TITLE: Microsoft.SqlServer.Smo Set offline failed for Database 'Go3D_Retailer' ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) Unable to open the physical file "E:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\ftrow_Go3D_catalog.ndf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)". Database 'Go3D_Retailer' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details. ALTER DATABASE statement failed. (Microsoft SQL Server, Error: 5120) Background to this error: I've been trying to move my destination log shipping database to another physical server for analysis purposes. Because I do not have domain keys and Active Directory set up, I had to hack my process by using the same username/password for both the source and destination servers to get the process to work. Following that, I used this guy's solution to move the destination database to another server. However, this error occurs when I try to bring the database back online. I don't have an E drive on my server and I have no idea why it's trying to open a file from the E drive. I have over 100 GB left on my hard disk so it's definitely not a space issue. This sounds like a bug... Any ideas?
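
    That .ndf looks like a full-text catalog file whose recorded path still points at the drive it lived on on the source server. A hedged sketch of one way to deal with it (the logical file name and target path below are placeholders; check yours first): list the file entries, repoint the missing one to a folder that exists on this server while the database is still offline, then try bringing it online again.

        -- See every file SQL Server thinks belongs to this database.
        SELECT name, physical_name
        FROM sys.master_files
        WHERE database_id = DB_ID('Go3D_Retailer');

        -- Repoint the full-text catalog file to a path that exists on this server.
        ALTER DATABASE [Go3D_Retailer]
            MODIFY FILE ( NAME = ftrow_Go3D_catalog,
                          FILENAME = 'D:\SQLData\ftrow_Go3D_catalog.ndf' );

        ALTER DATABASE [Go3D_Retailer] SET ONLINE;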

    Read the article

  • SQL Server 2008: Getting Login failed for user "Domain\User". Failed to open the explicitly specified database [CLIENT: IP.ADD.RR.ESS]

    - by GodEater
    This is a very similar issue to " SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database ", which unfortunately seems to have gone unsolved. My issue here is subtly different. Firstly, the account failing login is not 'NT AUTHORITY\NETWORK SERVICE' - it's an actual domain account. Secondly, there are two machines involved - I gathered from the first question it was a single machine running both the IIS and SQL instances. The application which is trying to connect to the database is an ASP.NET one running on another server (if that makes any difference; I'm not sure it does). The ConnectionString being used in the web.config for the application is: data source=MySQLServer;initial catalog=MyDatabase;integrated security=sspi; And the Application Pool is set to NetworkService for Identity. So - in the web app, I get the following error: Cannot open database "MyDatabase" requested by the login. The login failed. Login failed for user 'MyDomain\WebServerMachineName$' In the SQL Server logs I see: Login failed for user 'MyDomain\WebServerMachineName$'. Reason: Failed to open the explicitly specified database. [CLIENT: Web.Server.IP.Address] Running this bit of SQL against the database in question: USE [MyDatabase] GO SELECT SDP.name AS [User Name], SDP.type_desc AS [User Type], UPPER(SDPS.name) AS [Database Role] FROM sys.database_principals SDP INNER JOIN sys.database_role_members SDRM ON SDP.principal_id=SDRM.member_principal_id INNER JOIN sys.database_principals SDPS ON SDRM.role_principal_id = SDPS.principal_id Gets me this result: MyDomain\WebServerMachineName$ WINDOWS_USER DB_DDLADMIN MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAREADER MyDomain\WebServerMachineName$ WINDOWS_USER DB_DATAWRITER Which appears to me to indicate I've got the permissions right. Anyone have any idea why it's not working, or how I can narrow the issue down some more?
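
    One diagnostic worth running (a hedged sketch; it only checks one common cause): the role query above does not show whether the machine account actually holds CONNECT permission on the database. A user can sit in db_datareader and friends and still be missing or denied CONNECT, which produces exactly this "failed to open the explicitly specified database" message.

        USE [MyDatabase];
        SELECT dp.name           AS grantee,
               perm.permission_name,
               perm.state_desc   -- GRANT vs DENY
        FROM sys.database_permissions AS perm
        JOIN sys.database_principals AS dp
             ON perm.grantee_principal_id = dp.principal_id
        WHERE dp.name = 'MyDomain\WebServerMachineName$';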

    Read the article

  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular data stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup as the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80kb query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80kb query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network. 
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80kb query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted into packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits, however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (tabular data stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of data sent to our SQL Server, measured in bytes; i.e. how big a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client. 
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators that need the entire sub-query to be evaluated before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However, a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about a setup similar to the one above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500kb each, our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read about the big picture here. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by the number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface:Output queue length, Redirector:Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings. 
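
A small companion sketch for the monitoring guidelines above (not from the post itself, just the standard DMV): check how much of the server's accumulated wait time is ASYNC_NETWORK_IO, i.e. time spent waiting for the client and the network to take the results away.

    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'ASYNC_NETWORK_IO';
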
As further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend having the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queue series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • In Google webmaster tools, can a "soft 404" be triggered by the text on the page?

    - by Stephen Ostermiller
    I just ran across an error in Google Webmaster Tools that I have never seen before. I manage the website for my local community band (I play trombone). One of the pages on the site is a list of our upcoming performances. It is powered by a WordPress events plugin that uses a database of upcoming events that are entered through the administration interface. We just finished up our summer and fall concerts and our next performance will be our Christmas concert. I hadn't gotten around to adding that into the website yet, so there are no upcoming events shown on the page. In fact the text on the page says: No upcoming events listed under Performance. Check out past events for this category or view the full calendar. Then in Google Webmaster Tools, this page is showing up as a "soft 404": The page is returning a 200 status and Google is indicating that the 404 is "soft". I wouldn't have expected Googlebot to be sophisticated enough to parse that particular sentence. Is Googlebot able to detect that the text on the page indicates that there is currently no content and then treat it as a 404 page because of that? If Google is treating this page as a soft 404 because of the text on the page, does that mean that, like regular 404 pages, the page won't show up in search results?

    Read the article

  • Lightest-weight Ubuntu desktop for text editing in big terminal windows?

    - by Kevin Pauli
    I have an older Windows laptop onto which I'm installing Ubuntu within a VM. My goal is just to use terminal-based Linux tools such as vim and shell scripting. I don't give a hoot about any GUI for this box. So I first installed the Ubuntu minimal CD and chose "Basic Ubuntu Server". Upon boot, the text-based terminal came up and I logged in, but the problem is it only gives me 80 columns. I want to run terminal-mode vim but have a couple hundred columns to take advantage of my large monitor. If you happen to know how to do that, please see my question here. This post assumes that the other question is not answerable, and that I will need a desktop to get more than 80 columns in a terminal window. So if that is the case, I want the lightest-weight one possible, because this is older hardware and all I want is the ability to have nice big text-based terminal windows for editing text. From the Ubuntu minimal CD, I see options for Edubuntu, Kubuntu, etc. Which one of the available desktops would be a good choice for my needs?
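
    For what it's worth, here is a hedged sketch of one way the 80-column limit itself can sometimes be lifted without installing any desktop at all (assuming GRUB 2, as on recent Ubuntu releases): let the plain text console keep GRUB's graphics mode, so it runs at the monitor's resolution and gives far more than 80 columns.

        # /etc/default/grub  (edit, then run: sudo update-grub, and reboot)
        GRUB_GFXMODE=1280x1024          # a mode your monitor/VM actually supports
        GRUB_GFXPAYLOAD_LINUX=keep      # keep that mode for the Linux text console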

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why: 3,000 ISVs and OEMs rely on MySQL as their embedded database; 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings; leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster. Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • Web Seminar - The Oracle Database Appliance: How to Sell a Unique Product!

    - by swalker
    Dear partner, You are exclusively invited to join us for a webcast, dedicated to Oracle’s EMEA Partners, on the Oracle Database Appliance value proposition, positioning and ecosystem – to help you capture new business and help your customers roll out their solutions fast, easily, safely and with maximum cost efficiency! Join us to learn about: ODA Benefits: Fast, Easy, Cost Efficient, Highly Reliable; Feedback from early Customer Wins: What can we Learn?; Objection Handling: Overcoming the most common customer questions; Going beyond the Database: The ODA ECO System for applications, backup & more… When combined with your high-value services (e.g., migration, consolidation), the end result is a database system that you can use to grow the business in your existing accounts, or capture new business. Join us at the EMEA partner webcast hosted by Robert Van Espelo, Cloud and Virtualization Leader, EMEA Business Development, on Thursday, April 12, at 9:00am UK / 10:00am CET. The presentation will be given in English. To register for this webcast, click here. We look forward to talking to you on April 12! Best regards, Giuseppe Facchetti, EMEA Partner Business Development Manager, Oracle EMEA, Hardware Sales; Paul Leonard, EMEA Partner Marketing Manager, Oracle EMEA, Systems Marketing

    Read the article

  • How can I design my classes to include calendar events stored in a database?

    - by Gianluca78
    I'm developing a web calendar in PHP (using Symfony2), inspired by iCal, for a project of mine. At this moment, I have two classes: a class "Calendar" and a class "CalendarCell". Here are the two classes' properties and method declarations. class Calendar { private $month; private $monthName; private $year; private $calendarCellList = array(); private $translator; public function __construct($month, $year, $translator) {} public function getCalendarCellList() {} public function getMonth() {} public function getMonthName() {} public function getNextMonth() {} public function getNextYear() {} public function getPreviousMonth() {} public function getPreviousYear() {} public function getYear() {} private function calculateDaysPreviousMonth() {} private function calculateNumericDayOfTheFirstDayOfTheWeek() {} private function isCurrentDay(\DateTime $dateTime) {} private function isDifferentMonth(\DateTime $dateTime) {} } class CalendarCell { private $day; private $month; private $dayNameAbbreviation; private $numericDayOfTheWeek; private $isCurrentDay; private $isDifferentMonth; private $translator; public function __construct(array $parameters) {} public function getDay() {} public function getMonth() {} public function getDayNameAbbreviation() {} public function isCurrentDay() {} public function isDifferentMonth() {} } Each calendar day can include many calendar events (such as appointments or schedules) stored in a database. My question is: what is the best way to manage these calendar events in my classes? I am thinking of adding an eventList property to CalendarCell and populating it with an array of CalendarEvent objects fetched from the database. That kind of solution doesn't allow other coders to reuse the classes without a db (because I would also have to inject at least a repository service) just to create and display a calendar... so maybe it would be better to extend CalendarCell (for instance into a CalendarCellEvent) and add the database features there? I feel like I'm missing some crucial design pattern! Any suggestion will be very much appreciated!
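
    A hedged design sketch of one way to keep the calendar reusable without a database (the CalendarEvent class, the interface and the provider below are illustrative names, not part of the original code): let CalendarCell hold events, and have Calendar accept any event source through a small interface, of which a Doctrine/repository-backed implementation is just one option.

        <?php
        class CalendarEvent
        {
            public $title;
            public function __construct($title) { $this->title = $title; }
        }

        interface EventProviderInterface
        {
            /** @return CalendarEvent[] events falling on the given day */
            public function findEventsForDay(\DateTime $day);
        }

        // One possible provider: purely in memory, no database required.
        class InMemoryEventProvider implements EventProviderInterface
        {
            private $eventsByDay = array();

            public function add($dayKey, CalendarEvent $event)
            {
                $this->eventsByDay[$dayKey][] = $event;
            }

            public function findEventsForDay(\DateTime $day)
            {
                $key = $day->format('Y-m-d');
                return isset($this->eventsByDay[$key]) ? $this->eventsByDay[$key] : array();
            }
        }

        // Calendar::loadEvents(EventProviderInterface $provider) would then walk
        // $this->calendarCellList and call $cell->addEvent(...) for each event the
        // provider returns; a Doctrine repository is just another implementation,
        // so the calendar can be created and displayed with or without a database.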

    Read the article

  • How to disable the text shadow in Plasma widgets in KDE 4.6?

    - by piedro
    I just switched completely to Kubuntu for the sake of the applications more suitable for my purposes and their overall integration in the system. But I'm not very happy with all those desktop effects and transparent looks everywhere. Some things are a matter of taste, I guess. Some things are just unbearable for me, having weak eyes. One of them is that almost every light theme seems to use a text shadow effect on the Plasma widgets: a white "spilled milk" effect underneath dark text on a light grey or glassy background. Drives my eyes nuts! I can remove this effect on the desktop folder by unselecting "shadow" as the text effect, but I can't find any way to switch it off in the panel and the Plasma widgets. My second, related question is: is there a Plasma theme matching the Oxygen look of the default desktop in light colors that uses the same colors as the ones chosen in the KDE color settings – simple, opaque, no effects, GNOME-like? Plz help someone! thx, piedro

    Read the article

  • How do I restore default menu/top panel text color in Gnome Classic?

    - by Kobby
    The default text/icon color in my top panel has changed from white to a very dark grey, making my menus virtually unreadable against the black of the Ambiance theme. This includes even the login screen menu. I used Gnome Tweak to change the theme to Adwaita, but while some text has gone light grey (e.g. Date/Time), the login menu text remains dark grey, as do most icons (e.g. dropbox, wireless, battery indicator...) in the top panel after I log in. I tried deleting the top panel altogether but the option of deleting under Super + Alt + Right Click is blocked off. I tried running a panel from the terminal, but it came up in strange colors too, plus icons had moved around and some parts of the panel were opaque and other parts transparent. Deleting the panel wouldn't solve the basic problem anyway, as my login menu would still be very dark grey and unreadable against the default (black) Ambiance background. I would like to keep Ambiance but I want to reset the color to default (white) again. Can anyone help me?

    Read the article

  • PHP Code (modules) included via MySQL database, good idea?

    - by ionFish
    The main script includes "modules" which add functionality to it. Each module is set up like this: <?php //data collection stuff //(...) approx 80 lines of code //end data collection $var1 = 'some data'; $var2 = 'more data'; $var3 = 'other data'; ?> Each module has the same exact variables, just the data collection is different. I was wondering if it's a reasonable idea to store the module data in MySQL like this: [database] |_modules |_name |_function (the raw PHP data from above) |_description |_author |_update-url |_version |_enabled ...and then include the PHP-data from the database and execute it? Something like, a tab-navigation system at the top of the page for each module name, then inside each of those tabs the page content would function by parsing the database-stored code of the module from the function section. The purpose would be to save code space (fewer lines), allow for easy updates, and include/exclude modules based on the enabled option. This is how many other web-apps work, some of my own too. But never had I thought about this so deeply. Are there any drawbacks or security risks to this?
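
    A minimal sketch of the mechanism being described, purely as an illustration and not an endorsement (table and column names are taken from the layout above; it assumes the stored code is saved without the surrounding <?php ?> tags, since eval() takes bare PHP):

        <?php
        $pdo  = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass');
        $stmt = $pdo->query("SELECT name, `function` FROM modules WHERE enabled = 1");

        foreach ($stmt as $module) {
            // eval() executes arbitrary PHP: anyone who can write to this table
            // (SQL injection, a stolen admin password) can run code on the server,
            // and opcode caches cannot precompile it. That is the main security risk.
            eval($module['function']);

            // After eval(), the module's variables are available as usual.
            echo "{$module['name']}: $var1, $var2, $var3\n";
        }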

    Read the article

  • How to adjust the size of a programmatically created Bitmap to match the text drawn on it?

    - by TooFat
    I have the following .ashx page that takes some query string parameters and returns a bitmap with the specified text written on it. The problem I have is that I am currently just manually setting the initial size of the bitmap at 100 X 100 when what I really want is to have the bitmap be just big enough to include all the text that was written to it. How can I do this? public void ProcessRequest (HttpContext context) { context.Response.ContentType = "image/png"; string text = context.Request.QueryString["Text"]; //set FontName string fontName; if (context.Request.QueryString["FontName"] != null) { fontName = context.Request.QueryString["FontName"]; } else { fontName = "Arial"; } //Set FontSize int fontEms; if (context.Request.QueryString["FontSize"] != null) { string fontSize = context.Request.QueryString["FontSize"]; fontEms = Int32.Parse(fontSize); } else { fontEms = 12; } //Set Font Color System.Drawing.Color color; if (context.Request.QueryString["FontColor"] != null) { string fontColor = context.Request.QueryString["FontColor"]; color = System.Drawing.ColorTranslator.FromHtml(fontColor); context.Response.Write(color.ToString()); } else { color = System.Drawing.Color.Red; } using (System.Drawing.Text.PrivateFontCollection fnts = new System.Drawing.Text.PrivateFontCollection()) using (System.Drawing.FontFamily fntfam = new System.Drawing.FontFamily(fontName)) using (System.Drawing.SolidBrush brush = new System.Drawing.SolidBrush(color)) using (System.Drawing.Bitmap bmp = new System.Drawing.Bitmap(100, 100)) { using (System.Drawing.Font fnt = new System.Drawing.Font(fntfam, fontEms)) { fnts.AddFontFile(System.IO.Path.Combine(@"C:\Development\Fonts\", fontName)); System.Drawing.Graphics graph = System.Drawing.Graphics.FromImage(bmp); graph.DrawString(text, fnt, brush, new System.Drawing.Point(0, 0)); string imgPath = System.IO.Path.Combine(@"C:\Development\MyPath\Images\Text", System.IO.Path.GetRandomFileName()); bmp.Save(imgPath); context.Response.WriteFile(imgPath); } } }
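
    A hedged sketch of one common approach (reusing the handler's text, fontName, fontEms, color and imgPath variables from above): measure the string first with Graphics.MeasureString against a throwaway 1x1 bitmap, then create the real bitmap at exactly that size before drawing.

        using (var probe = new System.Drawing.Bitmap(1, 1))
        using (var probeGraphics = System.Drawing.Graphics.FromImage(probe))
        using (var fnt = new System.Drawing.Font(fontName, fontEms))
        {
            // Ask GDI+ how much room the text needs with this font.
            System.Drawing.SizeF needed = probeGraphics.MeasureString(text, fnt);

            using (var bmp = new System.Drawing.Bitmap(
                       (int)System.Math.Ceiling(needed.Width),
                       (int)System.Math.Ceiling(needed.Height)))
            using (var graph = System.Drawing.Graphics.FromImage(bmp))
            using (var brush = new System.Drawing.SolidBrush(color))
            {
                graph.DrawString(text, fnt, brush, 0f, 0f);
                bmp.Save(imgPath);   // save / write the response as in the original handler
            }
        }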

    Read the article

  • How can I format the text in a databound TextBox?

    - by Abe Miessler
    I have a ListView that has the following EditItemTemplate: <EditItemTemplate> <tr style=""> <td> <asp:LinkButton ID="UpdateButton" runat="server" CommandName="Update" Text="Update" /> <asp:LinkButton ID="CancelButton" runat="server" CommandName="Cancel" Text="Cancel" /> </td> <td> <asp:TextBox ID="FundingSource1TextBox" runat="server" Text='<%# Bind("FundingSource1") %>' /> </td> <td> <asp:TextBox ID="CashTextBox" runat="server" Text='<%# Bind("Cash") %>' /> </td> <td> <asp:TextBox ID="InKindTextBox" runat="server" Text='<%# Bind("InKind") %>' /> </td> <td> <asp:TextBox ID="StatusTextBox" runat="server" Text='<%# Bind("Status") %>' /> </td> <td> <asp:TextBox ID="ExpectedAwardDateTextBox" runat="server" Text='<%# Bind("ExpectedAwardDate","{0:MM/dd/yyyy}") %>' onclientclick="datepicker()" /> </td> </tr> </EditItemTemplate> I would like to format the "ExpectedAwardDateTextBox" so it shows a short date, but I haven't found a way to do this without going into the code-behind. In the ItemTemplate I have the following line to format the date that appears in the label: <asp:Label ID="ExpectedAwardDateLabel" runat="server" Text='<%# String.Format("{0:M/d/yyyy}",Eval("ExpectedAwardDate")) %>' /> I would like to find a similar method to use in the InsertItemTemplate.

    Read the article

  • Is this text wrapping technique possible in CSS and jQuery?

    - by alex
    I have built a sliding text thing for a website. http://www.solomonadventures.com/~new/adventure-tours/seafari-tours/ The background contains the menu (on the right-hand side), and when the page first loads, I have placed an element to make the text look like it is wrapping around the menu. Now, I have a sliding text thing I was asked to implement. The buttons to use it are currently in the top left corner. My question is: when I slide the content down, am I able to somehow make the text still wrap around it? This is all I have thought of so far (all with trade-offs): make the text appear beneath the menu - no need to wrap; make the text only as wide as the start of the menu - no need to wrap; manually place placeholders in the text that make it line-break so it appears to wrap - not elegant (the site uses a CMS too). Is there any jQuery selector I could write that would allow me to select the paragraph at the top (once slid to the top) or the topmost text node (so I could use after() to place a new placeholder element to force it to wrap)? Any other solutions? Many thanks.

    Read the article

  • emacs: How to intelligently handle buffer-modified when setting text properties?

    - by Cheeso
    The documentation on Text Properties says: Since text properties are considered part of the contents of the buffer (or string), and can affect how a buffer looks on the screen, any change in buffer text properties marks the buffer as modified. First, I don't understand that policy. Can anyone explain? The text props are not actually saved in the file when the buffer is saved, so why mark the buffer as modified? For me, buffer-modified indicates "some changes have not yet been saved", but understanding the policy is just for my own amusement. More importantly, is there an already-established way that, in code, I can change syntax text properties on the text in a buffer while keeping the buffer-modified flag set to whatever it was prior to those changes? I'm thinking of something like save-excursion. It would be pretty easy to write, but this seems like a common case and I'd like to use the standard function, if possible. For more on the scenario: I have a mode that does a full text scan and sets syntax-table properties on the text. After opening a buffer, the scan runs, but it results in a buffer with buffer-modified set to t. As always, thanks.
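
    A hedged sketch of such a helper (my own, not a built-in; newer Emacsen also ship a `with-silent-modifications' macro for exactly this): apply the properties, then restore the buffer-modified flag to whatever it was before.

        (defun my-add-text-properties-quietly (beg end props)
          "Apply PROPS to the region BEG..END without marking the buffer modified."
          (let ((was-modified (buffer-modified-p)))
            (unwind-protect
                (add-text-properties beg end props)
              (unless was-modified
                (restore-buffer-modified-p nil)))))

        ;; Example: give the char after point string syntax without dirtying the buffer.
        ;; (my-add-text-properties-quietly (point) (1+ (point))
        ;;                                 (list 'syntax-table (string-to-syntax "\"")))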

    Read the article
