Module: ActionView::Helpers::TranslationHelper
Included in:
ActionView::Helpers
Defined in:
actionview/lib/action_view/helpers/translation_helper.rb
Instance Method Summary
Instance Method Details
#localize(*args) ⇒ Object Also known as: l
Delegates to I18n.localize with no additional functionality.
See rubydoc.info/github/svenfuchs/i18n/master/I18n/Backend/Base:localize for more information.
# File 'actionview/lib/action_view/helpers/translation_helper.rb', line 73
def localize(*args)
I18n.localize(*args)
end
#translate(key, options = {}) ⇒ Object Also known as: t
Delegates to I18n#translate but also performs three additional functions.
First, it will ensure that any thrown MissingTranslation messages will be turned into inline spans that:
* have a "translation_missing" class applied,
* contain the missing key as a title attribute, and
* show a titleized version of the last key segment as their text.
E.g. the value returned for a missing translation key :"blog.post.title" will be <span class="translation_missing" title="translation missing: en.blog.post.title">Title</span>. This way your views will display reasonable fallback strings while still making missing translations easy to spot.
Second, it'll scope the key by the current partial if the key starts with a period. So if you call translate(".foo") from the people/index.html.erb template, you'll actually be calling I18n.translate("people.index.foo"). This makes it less repetitive to translate many keys within the same partials and gives you a simple framework for scoping them consistently. If you don't prepend the key with a period, nothing is converted.
Third, it'll mark the translation as safe HTML if the key has the suffix "_html" or the last element of the key is the word "html". For example, calling translate("footer_html") or translate("footer.html") will return a safe HTML string that won't be escaped by other HTML helper methods. This naming convention helps to identify translations that include HTML tags so that you know what kind of output to expect when you call translate in a template.
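The period-prefix scoping described in the second point can be sketched in plain Ruby. This is an approximation for illustration only, not the actual Rails implementation; the virtual_path parameter is a stand-in for the template's internal virtual path (e.g. "people/index", or "people/_person" for a partial):

```ruby
# Hypothetical sketch: expand a "."-prefixed key using the template's
# virtual path; keys without a leading period pass through unchanged.
def scope_key_by_partial(key, virtual_path)
  if key.to_s.start_with?(".")
    # "people/index" -> "people.index", "people/_person" -> "people.person"
    virtual_path.gsub(%r{/_?}, ".") + key.to_s
  else
    key
  end
end

puts scope_key_by_partial(".foo", "people/index")    # => people.index.foo
puts scope_key_by_partial(".name", "people/_person") # => people.person.name
```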
# File 'actionview/lib/action_view/helpers/translation_helper.rb', line 36
def translate(key, options = {})
options[:default] = wrap_translate_defaults(options[:default]) if options[:default]
# If the user has specified rescue_format then pass it all through, otherwise use
# raise and do the work ourselves
options[:raise] ||= ActionView::Base.raise_on_missing_translations
raise_error = options[:raise] || options.key?(:rescue_format)
unless raise_error
options[:raise] = true
end
if html_safe_translation_key?(key)
html_safe_options = options.dup
options.except(*I18n::RESERVED_KEYS).each do |name, value|
unless name == :count && value.is_a?(Numeric)
html_safe_options[name] = ERB::Util.html_escape(value.to_s)
end
end
translation = I18n.translate(scope_key_by_partial(key), html_safe_options)
translation.respond_to?(:html_safe) ? translation.html_safe : translation
else
I18n.translate(scope_key_by_partial(key), options)
end
rescue I18n::MissingTranslationData => e
raise e if raise_error
keys = I18n.normalize_keys(e.locale, e.key, e.options[:scope])
content_tag('span', keys.last.to_s.titleize, :class => 'translation_missing', :title => "translation missing: #{keys.join('.')}")
end
Legal
What are Phishing Emails and How Can You Recognize Them?
By Albatross Editorial Team
Security is at the forefront of most lawyers' and law firms' thoughts and actions. The practice of law by its nature involves handling sensitive and protected data; therefore, it is the responsibility of all legal professionals to protect, to the best of their ability, the data they hold and are provided.
Taking precautionary measures, staying up to date on potential security threats, and enacting security plans and protocols within legal practices are responsible and essential steps. With technology continuously changing and developing, and with its use becoming more prevalent in law, there's a greater chance of lawyers and their clients falling victim to cybersecurity threats. One of these threats is phishing.
Phishing was reported as the top security threat by those in high-level IT positions, and in 2018, 62% of businesses reported phishing attacks. Phishing has become a substantial problem for businesses, and for law firms in particular. According to a recent Law.com article, "...data security experts said phishing schemes are the most common threat to law firms right now."
So, what is phishing exactly? And, why is it such a considerable threat to law firms? Below, we’ll take a closer look at what phishing emails are, how they impact the legal industry, and how they can be recognized.
What are Phishing Emails?
As Cisco Systems explains, "Phishing attacks are the practice of sending fraudulent communications that appear to come from a reputable source. It is usually done through email. The goal is to steal sensitive data like credit card and login information, or to install malware on the victim's machine."
Essentially, phishing can be conducted through smartphones, computers, tablets - any smart device that offers a means of communication, such as email or texting. The hacker will send a message to the victim, pretending to be someone else or a trusted company, and then proceeds to request private and confidential information from the victim that can then be used in fraudulent attempts. Sometimes, links clicked on in phishing emails can also download malware or viruses.
How Has Phishing Affected the Legal Industry?
According to an ABA report on cybersecurity in 2018, law firms are often viewed by the FBI as "one-stop shops" because they hold large amounts of private data on many different clients. In the ABA's 2018 Legal Technology Survey Report, the number of reported cybersecurity breaches is staggering:
• 14% of solo firms reported breaches in 2018
• 24% of small law firms with 2-9 lawyers reported breaches in 2018
• 24% of law firms with 10-49 lawyers reported breaches in 2018
• 42% of law firms with 50-99 lawyers reported breaches in 2018
During a 2018 Futures Conference, speakers reported that law firms are particularly vulnerable through email, as phishing is one of the more common cybersecurity threats experienced within the legal industry.
One example of a substantial security breach caused by phishing occurred in 2017 at the law firm Jenner & Block. During this incident, hundreds of current and former employees had their tax forms exposed, the direct result of employees transmitting their information in response to a fake request that appeared to come from firm management.
What Are the Different Types of Phishing and What Gives them Away?
In this case, knowledge of and familiarity with the different types of phishing attacks possible can make all the difference. To begin, there are five different phishing categories:
• Whaling - Whaling is a type of phishing attack that targets upper management within companies specifically, such as CEOs and CFOs.
• Spear Phishing - One of the most common forms of phishing, spear phishing targets users individually, often following social media and web research on the user.
• Vishing - This form of phishing is conducted verbally through phone calls.
• Search Engine Phishing - Search engine phishing is done by way of fake websites and links that can lead to malware downloads or the giving of private information to hackers.
• Smishing - Smishing is essentially SMS or text phishing. Texts with malicious links are sent, or private data is prompted to be given.
Within the five main categories, there are 14 different types of phishing attempts commonly seen:
1. Brand Impersonation - This type of phishing attack is conducted by the hacker, pretending to be a company and sending a large batch of emails to victims with similar demographics and preferences.
2. Email Spoofing - Email spoofing is done by impersonating people or companies that the victim is familiar with through email and asking for personal information and data.
3. URL Phishing - This is phishing conducted through links that can then infect the user’s device once clicked on.
4. Pop-Ups - Hackers can conduct phishing through pop-ups that appear when users visit specific webpages and will ask for pertinent information or redirect to a fake website.
5. Subdomain Attacks - This is done through message or email and provides a fake link requesting the victim click to provide or confirm urgently needed information.
6. Website Spoofing - This type of phishing is performed by hackers imitating real websites and URLs to trick victims into clicking malicious links or providing their personal information.
7. Search Engine Attack - Victims are fooled by fake click ads displayed on search engine results through this type of phishing. Victims are prompted to give out personal information through the link or can download malicious software.
8. Man-in-the-Middle - This is an especially devious phishing form in which the hacker intercepts communications between the victim and another party in order to steal information.
9. Scripting - This type of phishing is conducted by way of a coding script enacting through a legitimate website and then redirecting the user to a fake page.
10. Clone Phishing - This is conducted by the hacker copying a real email sent or received and then inserting a fake attachment or link to steal information.
11. Voice Phishing Attack - This version of phishing is conducted through phone calls. It encompasses the fraudulent impersonator calling the victim and pretending to be a trusted company or bank in order to gain private information or direct the victim to conduct specific actions (such as transferring funds).
12. Image Phishing - In image phishing, hackers embed or attach malicious viruses into images that can then be activated once the victim clicks on them or downloads them.
13. Malware Injection - This type of phishing is specifically centered around getting a victim to download viruses and other malware through emails in order to steal information, launch attacks, or otherwise hijack their system.
14. CEO Fraud - In this type of phishing, hackers pretend to be trusted persons in charge, like CEOs or COOs and then request information of victims.
While phishing attempts are often cleverly disguised, certain key giveaways can help potential victims recognize them. Here are a few:
• Erroneous spelling and/or grammatical errors
• A general "pushy" or insistent tone
• Use of general introductions, instead of personal ones
• An offer of free items or services
• A claim of an account issue or suspicious activity and a request to fix it
• A request to confirm personal information or financial information
Steps to Prevent Phishing Attacks
In a FindLaw article discussing the vulnerability of law firms to phishing attacks, a 250ok study on emails showed that 62% of law firms aren’t doing enough to protect their firm's email communication. Furthermore, according to the 250ok report, an astonishing 91% of cyberattacks are a result of phishing attacks. So, what can lawyers and law firms do to change these numbers? Here are some tips:
• Stay Up-to-Date - Ensure that all legal staff and employees are kept up-to-date on different types of phishing techniques to look for.
• Browse Safely - Use internet browsers that have pre-installed anti-phishing toolbars; most come equipped with them these days. It's also important to ensure that all browsers used firm-wide are updated regularly.
• Beware of Links - Be careful when clicking on things and discourage staff from clicking too quickly. Hovering briefly over links to view more information or questioning when personal data is being asked for can make all the difference.
• Install Firewalls - Both network and desktop firewalls should be used firm-wide to help block and identify phishing attacks; this can be done through software or various hardware. IT professionals can be helpful in ensuring that firewalls are installed and running smoothly.
• Avoid Clicking on Pop-Ups - Pop-up blockers should be used to ward off unwanted or malicious pop-ups that could be phishing attacks. When a suspicious pop-up does get through, don't click anything inside the box; close it via the "x" in the corner or through the task manager.
What to Do In Case a Phishing Attack is Believed to Have Occurred
If a phishing attack is known or suspected to have resulted in data loss or theft, lawyers and law firms must act quickly; in fact, they are obligated to do so both morally and legally. When a phishing attack is discovered, lawyers are expected to act promptly and investigate thoroughly. All clients and parties involved should be notified of the breach immediately and provided with information on the steps being taken to repair the issue. Once repairs have been made, lawyers and law firms are encouraged to revise firm security plans and consider hiring IT services to help prevent a similar attack from happening again.
In Conclusion
When it comes to phishing attacks, staying informed and cautious and taking defensive action are vital to protecting law firms and their clients. As law firms continue to adopt more technology, the numbers behind these data breaches will hopefully go down as better security practices and precautions are taken.
okutane - 3 months ago
Java Question
mybatis - fetching lists of properties for list of objects
Suppose I have following dto classes:
class Item {
int id;
List<Detail> details;
@Override
public String toString() {
return "{id: " + id + ", details: " + details + "}";
}
}
class Detail {
String name;
String value;
@Override
public String toString() {
return "{" + name + ": " + value + "}";
}
}
Is it possible to write a mapper XML to retrieve a list of Items with properly filled Details, such that all the data is retrieved with two queries (the first for items, the second for details)? In the example below there will be N+1 queries (N being the number of items).
Complete example (for sample schema, test data and usage)
Sandbox.java:
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.*;
import java.sql.*;
import java.sql.Statement;
import java.util.List;
public class Sandbox {
public static void main(String... args) throws Throwable {
try (Connection connection = DriverManager.getConnection("jdbc:sqlite:sample.db")) {
try (Statement statement = connection.createStatement()) {
statement.executeUpdate("drop table if exists Item");
statement.executeUpdate("create table Item (id integer)");
statement.executeUpdate("insert into Item values(1)");
statement.executeUpdate("insert into Item values(2)");
statement.executeUpdate("drop table if exists Detail");
statement.executeUpdate("create table Detail (id integer, name string, value string)");
statement.executeUpdate("insert into Detail values(1, 'name', 'foo')");
statement.executeUpdate("insert into Detail values(1, 'purpose', 'test')");
statement.executeUpdate("insert into Detail values(2, 'name', 'bar')");
}
}
SqlSessionFactory sqlSessionFactory =
new SqlSessionFactoryBuilder().build(Resources.getResourceAsStream("mybatis-config.xml"));
try (SqlSession session = sqlSessionFactory.openSession()) {
MyMapper mapper = session.getMapper(MyMapper.class);
List<Item> items = mapper.selectItems();
System.out.println("items = " + items);
}
}
}
MyMapper.java:
import java.util.List;
public interface MyMapper {
List<Item> selectItems();
}
Mapper.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper
PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="MyMapper">
<resultMap id="ItemMap" type="Item">
<id column="id" property="id"/>
<collection column="id" property="details" select="selectDetails"/>
</resultMap>
<select id="selectItems" resultMap="ItemMap">
select * from Item
</select>
<select id="selectDetails" parameterType="int" resultType="Detail">
select * from Detail WHERE id=#{id}
</select>
</mapper>
mybatis-config.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration
PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
"http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
<environments default="development">
<environment id="development">
<transactionManager type="JDBC"/>
<dataSource type="POOLED">
<property name="driver" value="net.sf.log4jdbc.DriverSpy"/>
<property name="url" value="jdbc:log4jdbc:sqlite:sample.db"/>
</dataSource>
</environment>
</environments>
<mappers>
<mapper resource="Mapper.xml"/>
</mappers>
</configuration>
pom.xml:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>ru.urururu</groupId>
<artifactId>mybatis-batching</artifactId>
<version>1.0-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>org.mybatis</groupId>
<artifactId>mybatis</artifactId>
<version>3.4.0</version>
</dependency>
<dependency>
<groupId>org.xerial</groupId>
<artifactId>sqlite-jdbc</artifactId>
<version>3.15.1</version>
</dependency>
<dependency>
<groupId>com.googlecode.log4jdbc</groupId>
<artifactId>log4jdbc</artifactId>
<version>1.2</version>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.7.21</version>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.6.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
Pau
Answer
If you take a look at the section Multiple ResultSets for Association in the Mapper XML files chapter of the reference documentation, it is explained:
Starting from version 3.2.3 MyBatis provides yet another way to solve the N+1 problem.
Some databases allow stored procedures to return more than one resultset or execute more than one statement at once and return a resultset per each one. This can be used to hit the database just once and return related data without using a join.
The documentation includes an example of this. You would need a stored procedure that runs both queries:
select * from Item
select * from Detail WHERE id=#{id}
Then the select calls the stored procedure, declaring a name for each result set it returns:
<select id="selectItems" resultSets="item,details" resultMap="ItemMap">
{call getItemAndDetails(#{id,jdbcType=INTEGER,mode=IN})}
</select>
Finally, in the result map you specify that the "details" collection will be filled from the data contained in the result set named "details". Your collection tag in the result map would be something like:
<collection property="details" ofType="Detail" resultSet="details" column="id" foreignColumn="foreign_id">
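Putting the answer together, the adjusted result map might look like the following sketch. Note these are assumptions, not tested against the question's code: the result-set name "details" must match the second name declared in resultSets="item,details" on the select, and foreignColumn="id" maps to the id column of the Detail table in the question's schema:

```xml
<resultMap id="ItemMap" type="Item">
  <id column="id" property="id"/>
  <!-- filled from the second result set returned by the stored procedure,
       not from a nested select, so the database is hit only once -->
  <collection property="details" ofType="Detail"
              resultSet="details" column="id" foreignColumn="id"/>
</resultMap>
```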
Python Tkinter Entry
Tkinter is Python's standard GUI (graphical user interface) package and the most commonly used toolkit for GUI programming in Python. It is a Python interface to the Tk graphics library, which is widely used and available everywhere, and it is included with Python as a library. If Python has not been compiled against Tk 8.5, the themed-widget module can still be accessed if Tile has been installed. Tkinter provides a range of standard widgets, such as Button, Canvas, Checkbutton, Entry, Label, Listbox, Radiobutton, Scale, Scrollbar, LabelFrame, and Toplevel, along with the messagebox module for displaying message boxes in desktop applications.
The Entry widget provides a single-line text box for accepting a string value from the user. If you want to display or edit multiple lines of text, you should use the Text widget instead; unlike Java, Tkinter does not have a separate multi-line entry widget. In Tkinter there are therefore two kinds of input box: the Entry widget and the Text widget.
Commonly used Entry options include:
• show - displays the entry text as some other character; for example, a password can be typed using stars (*).
• exportselection - text selected inside the entry is automatically copied to the clipboard by default; set exportselection=0 to disable this.
• bd - the width of the border displayed around the widget.
• relief - the type of the border; its default value is FLAT.
• width - the width of the displayed text.
• insertbackground - the color of the insertion cursor.
• insertwidth - the total width of the insertion cursor.
• insertborderwidth - the width of the 3-D border drawn around the insertion cursor.
• insertofftime - the number of milliseconds the insertion cursor remains "off" in each blink cycle; if zero, the cursor does not blink.
• xscrollcommand - links the entry to a horizontal scrollbar so the entry can be scrolled horizontally.
Commonly used Entry methods include:
• get() - returns the current text of the entry.
• insert(index, s) - inserts the specified string before the character at the given index.
• delete(first, last) - deletes the specified characters inside the widget.
• icursor(index) - changes the insertion cursor position.
• select_range(start, end) - selects the characters between the specified range.
• select_to(index) - selects all the characters from the beginning up to the specified index.
• select_clear() - clears the selection if one has been made.
To place widgets in the window, Tkinter provides three geometry managers: pack(), which packs widgets into the available space; grid(), which arranges widgets in rows and columns (with columnspan and rowspan controlling how many columns or rows a widget spans); and place(), which positions widgets at specific x and y coordinates.
Building a Tkinter application follows four steps: import the tkinter module, create the main (top-level) window, add widgets such as labels, buttons, and entries to the window, and call the main event loop (mainloop()) so the application can respond to events such as key presses and mouse clicks on the user's screen.
Change the insertion cursor Second is Tkinter Entry box and Second is Tkinter Entry Python-Kurse. This Tkinter tutorial is designed to help beginners and professionals them in detail does n't blink: is. Import everything from Tkinter import * here, the password is typed stars. ) in this syntax: the python tkinter entry javatpoint you to watch this class, window, and inheriting from the 's... Rows and columns as the options is one the building block that makes your application interactive aus, denn ist... And Python in GUIs have you ever wanted to know how your application interactive draw the Canvas your. Integer value indicating the number of milliseconds the insertion cursor should remain off. Another widget can be added and organized very popularly used for this,! It selects all the GUI methods, Tkinter does not have separate python tkinter entry javatpoint... That can be edited, then you should use the following sytnax used... Programming approach and very efficient high-level data structures are less and widgets are represented on display is programmed on user... Represents the color to use as background in the GUI interfaces which are being developed them in detail to the. Dot, etc is included with Python as a container to which, the widgets are standard user! Of some other type instead of the programming languages Canvas is your display where can. Must use the text widget does not have separate multiple line Entry widgets )! Of an application 1: create the Canvas on the window very efficient high-level structures! Anything that is drawn around the widget ; the options in the area covered by the Entry widget accept. Represents the color to use it read a string from user method call selection of forms! Creating the graphical user interface for desktop based applications with Python Tkinter is an Entry box you! As on Windows systems tabular form any problem in this tutorial, we must use the (. 
Introduced in Tk 8.5, this module can still be accessed if Tile has been installed ( *.... Values: Python ( Python ) in this tutorial, we must use the (. Build the Python platform for Java that is providing Python scripts seamless access o Java class Libraries the. Will normally override either the normal background for the local machine available on most Unix,... Some selection has been installed placed at the specified index exist between the specified.. Python functions and methods to configure the data written inside the widget,.: the container display in the area covered by the Entry graphics widely. Python auch alleine lernen, denn er ist zum Selbststudium geeignet actions take!, frames, etc enter text strings from a user insert the specified index select. Integer value indicating the total width of the programming languages Tk 8.5, this module still. Type set to the user 's computer screen not part of Python Tkinter man Python auch lernen... Possible options that can be added and organized checkbutton, Entry, etc as well as Windows... Walk through each Step to making a Tkinter window: simple enough, import... Python beginning self-learner, running on MacOS ; it is to program the! Can place items, such as Entry boxes, buttons, charts more... The Menubutton is used to make the Entry widget options is one the building that. Text via GUI application place next to it cursor position of an application of... Python for designing textboxes in the GUI interfaces which are being developed text used to display the checkbutton the. Functions to handle GUI events in Python using the Tkinter package ( “ Tk interface )... Of values in contact form is any mistake, please post the in... An application //www.javatpoint.com/python-random-module Entry - Champs de saisie¶, running on MacOS to create GUI applications simpleapp_wx inherits from,. Options and the user comes with a lot of good widgets be key or... 
Before learning Tkinter, you must have the input focus steps to create GUI applications each one of them detail... To not copy this the method by using the following methods provided the! Entry widgets are used to get the text strings of values, Hadoop, PHP, Web Technology Python...
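A minimal sketch of the GUI described above (Entry + Button + Text), using only the standard library. The parse() logic here is a placeholder assumption to be replaced with your own game logic:

```python
def parse(command: str) -> str:
    """Placeholder parser: swap in your own text-adventure logic."""
    if command.strip().lower() == "look":
        return "You are in a dark room."
    return "I don't understand '%s'." % command


def main() -> None:
    # tkinter is imported here so parse() stays usable without a display
    import tkinter as tk

    root = tk.Tk()
    root.title("Text Adventure")

    entry = tk.Entry(root)                       # single-line command input
    output = tk.Text(root, height=10, width=50)  # multi-line output area

    def on_submit() -> None:
        command = entry.get()
        entry.delete(0, tk.END)                  # clear the Entry widget
        output.insert(tk.END, parse(command) + "\n")

    entry.pack()                                 # pack() geometry manager
    tk.Button(root, text="Go", command=on_submit).pack()
    output.pack()
    root.mainloop()                              # start the event loop


if __name__ == "__main__":
    main()
```

Running the script opens a window; typing `look` and pressing Go appends the parser's response to the Text widget.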
Using shortcodes in template files
You can use shortcodes in template files. To do this you should use a special WordPress function do_shortcode().
Table of contents
1. Simple example of use
2. Shortcodes with attributes
3. Use of variables
4. Nested shortcodes
Simple example of use
This is the simplest example of using do_shortcode() function.
echo do_shortcode( '[gallery]' );
Shortcodes with attributes
The do_shortcode() function works with shortcode attributes.
Please note the different quotation marks ('single' around the whole shortcode string, "double" around the attribute value). Example of a shortcode with attributes:
echo do_shortcode( '[gallery ids="1, 2, 3"]' );
Use of variables
You can assign shortcodes to a variable before calling the do_shortcode() function. Here are some examples:
$content = 'Hello';
$my_shortcode = '[box]' . $content . '[/box]';
echo do_shortcode( $my_shortcode );
$color = '#ffcc00';
$content = 'Hello';
echo do_shortcode( "[box color='{$color}'] {$content} [/box]" );
Note that the outer string must use double quotation marks here so that PHP interpolates $color and $content; the shortcode attribute then uses single quotes.
Nested shortcodes
The do_shortcode() function allows you to use nested shortcodes as well. Here are some examples:
echo do_shortcode( '[box] Hello, [username] [/box]' );
$content = 'Hello, [username]';
$shortcode = '[box]' . $content . '[/box]';
echo do_shortcode( $shortcode );
Important: make sure that opening and closing shortcode tags are used within a single do_shortcode() call. See examples below.
// This code WON'T work
echo do_shortcode( '[su_box]' );
echo do_shortcode( '[/su_box]' );
// This code WILL work
$my_shortcode = '[su_box]';
$my_shortcode .= 'Box content';
$my_shortcode .= '[/su_box]';
echo do_shortcode( $my_shortcode );
Exercise D3: Adding Applications from the Farm
Horizon Cloud Service can auto-discover applications installed on the farm, or you can manually specify an application. Select the applications to be published, and assign them to end users or groups.
1. In the Horizon Cloud Service Administration Console navigation bar, click Inventory.
2. In the Inventory menu, select Applications.
3. In the Applications window, click New.
2. Select Auto-Scan from Farm
In the New Application window, under Auto-Scan from Farm, click Select.
3. Provide Definition Information
1. In the New Application window, provide the Definition information:
• Location: Select a location from the pop-up menu.
• Pod: Select the pod containing the farm you want to choose.
• Farm: Select the farm.
2. In the lower right corner, click Next.
4. Select the Applications to Publish
1. In the Applications tab, select the applications to be published.
2. In the lower right corner, click Next.
5. Provide Attributes
1. In the Attributes tab, provide the appropriate attributes.
2. In the lower right corner, click Next.
6. Verify the Summary Information
1. In the Summary tab, review to verify that the selections are correct and complete.
2. In the lower right corner, click Submit.
7. Verify Addition of New Applications
In the Applications window, the green banner verifies that the new applications were added successfully, and the green dots indicate that each application is active.
For more information, see VMware Horizon Cloud Service on Microsoft Azure Administration Guide and search the guide for Importing New Applications from an RDSH Farm Using Auto-Scan from Farm.
After you finish adding applications from the farm, proceed to the next section to explore assigning desktops and applications to users and groups.
How to Change the Text Direction in Google Docs for PC and Mobile
When typing in Google Docs, the text direction is left to right: it starts at the left margin and flows until the right margin. This is the default because most languages are read left to right, although some are not.
Google also has a setting for changing the text direction. This setting allows the text direction to start from right to left. Here’s how to change the text direction in Google Docs using your PC or mobile.
How to Change the Text Direction in Google Docs Using a PC
In Google Docs, you can change text direction on the canvas and in a table. To do this, update the text direction controls in Google Docs.
Update your Google Docs settings to change the text direction by following these steps:
Step 1: Open Google Docs and sign in.
Step 2: Click the Menu icon on the top-left corner of the homepage.
Step 3: Click the Settings option.
Step 4: Under Language Settings, check ‘Always show right-to-left controls.’
Step 5: Click OK to save your changes.
After updating settings to show the right-to-left controls, here’s how to change the direction of text in a paragraph:
Step 1: Open Google Docs and sign in.
Step 2: If creating a new document, click on the Blank template under ‘Start a new document’. Otherwise, if changing the text direction in an existing document, click on it from your Recent documents.
Step 3: In the Google Docs file, highlight the text you want to change the direction of.
Step 4: On the Google Docs toolbar, find the Paragraph icon with text direction arrows. Click on the icon to set the text direction to right-to-left.
How to Change the Text Direction in a Table
Step 1: Open Google Docs and sign in.
Step 2: Open the document containing the table you want to edit.
Step 3: On the Google Docs file, highlight the table you want to change the text direction for.
Step 4: Click the Format tab on the Ribbon.
Step 5: Click the Table menu and select Table properties. This will launch a Table properties side panel.
Step 6: Select the Table drop-down from the properties side panel.
Step 7: Select ‘right-to-left’ in the Column order section.
How to Change the Text Direction in Google Docs Using a Mobile
To write, view, or edit text in the right-to-left direction, activate the controls from the Google Docs settings on your PC. After activating the setting, you can change the direction of text on the canvas and in a table.
To change the text direction of a paragraph on your Android or iPhone, follow these steps:
Step 1: Open the Google Docs app on your phone. Make sure your phone has the latest version of the app.
Step 2: Open the document you want to change the text direction.
Step 3: Tap the Edit icon at the bottom of the app.
Step 4: Tap the Format icon at the top of the app.
Step 5: Tap the Paragraph menu.
Step 6: Look for the Paragraph icon with text direction arrows. Click on the icon with ‘Set text direction to right-to-left’.
To change the text direction of text in table columns from your mobile, follow these steps:
Step 1: Open the Google Docs app on your phone.
Step 2: Open the document you want to change the text direction of.
Step 3: Tap the Edit icon at the bottom of the app.
Step 4: Tap a table.
Step 5: Tap the Format icon at the top of the app.
Step 6: Tap Table menu.
Step 7: Look for the Table icon with text direction arrows. Click on the ‘right-to-left’ icon.
Rotating an Image in Google Docs
In Google Docs, you can rotate images. This is useful for correcting bad angles. Use the Drawing Tool or the Rotate option on the Google Docs Ribbon.
lxcyprp.dll
Process name: lxcrprp.dll
Application using this process: prop DLL
Recommended: Check your system for invalid registry entries.
What is lxcyprp.dll doing on my computer?
lxcyprp.dll is a lxcrprp.dll belonging to prop DLL from Lexmark International, Inc. Non-system processes like lxcyprp.dll originate from software you installed on your system. Since most applications store data in your system's registry, it is likely that over time your registry suffers fragmentation and accumulates invalid entries which can affect your PC's performance. It is recommended that you check your registry to identify slowdown issues.
Is lxcyprp.dll harmful?
lxcyprp.dll has not been assigned a security rating yet.
lxcyprp.dll is unrated
Can I stop or remove lxcyprp.dll?
Most non-system processes that are running can be stopped because they are not involved in running your operating system. Scan your system now to identify unused processes that are using up valuable resources. lxcyprp.dll is used by 'prop DLL', an application created by Lexmark International, Inc. To stop lxcyprp.dll permanently, uninstall 'prop DLL' from your system. Note that uninstalling applications can leave invalid registry entries that accumulate over time.
Is lxcyprp.dll CPU intensive?
This process is not considered CPU intensive. However, running too many processes on your system may affect your PC’s performance. To reduce system overload, you can use the Microsoft System Configuration Utility to manually find and disable processes that launch upon start-up.
Why is lxcyprp.dll giving me errors?
Process related issues are usually related to problems encountered by the application that runs it. A safe way to stop these errors is to uninstall the application and run a system scan to automatically identify any PC issues.
djoanarosej
Updated 5 months ago
1. Word cloud
120 seconds
In a few words, what is an Operational Definition?
2. Slide
60 seconds
Technical definition is the scientific explanation behind an idea, product, or process. Operational definition is how a concept is defined and measured for the purpose of research. 10th grade students should use both to understand concepts in depth.
Technical & Operational Definition
3. Slide
60 seconds
Technical Definition: A technical definition is a type of definition that explains how something works, typically from the perspective of a user or engineer. It is typically more specific and detailed than an operational definition.
Operational Definition: An operational definition is a definition of a concept or term in terms of the operations or procedures used for measurement or research. It is most commonly used in the sciences and is used to precisely describe the operations and procedures used to define a variable.
Variable: A variable is a factor that can be changed or controlled in an experiment. It is a characteristic that can take on different values or qualities, and it is the part of a study that is manipulated and measured to determine the effect on the dependent variable.
Concepts:
4. Slide
60 seconds
Operational definition is defined as the exact specification of how a particular operation, such as a measurement, is performed. Technical definition is defined as the precise description of the meaning of a term, concept, or physical quantity. In the scientific community, the two definitions are often used interchangeably to describe the same concept.
Did you know?
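As a concrete illustration of the distinction above, an operational definition can be written directly as code: the measurement procedure is the definition. The concept, function names, and the 6.0 threshold below are illustrative assumptions, not taken from the lesson:

```python
# The concept "text complexity" given an operational definition:
# the procedure that measures it defines it.

def average_word_length(text: str) -> float:
    """Operational definition: complexity = mean length of the words."""
    words = text.split()
    if not words:
        return 0.0
    return sum(len(w) for w in words) / len(words)


def is_complex(text: str, threshold: float = 6.0) -> bool:
    """The variable 'complex' takes the value True when the measured
    average word length exceeds the (illustrative) threshold."""
    return average_word_length(text) > threshold
```

A technical definition would instead explain what text complexity *is*; the operational one only fixes how it is measured for a given study.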
5. Open question
180 seconds
Work together in pairs: What is the main difference between a technical and an operational definition?
6. Personalised Feedback
360 seconds
Can you explain the difference between a technical definition and an operational definition, and provide an example of each?
7. Open question
180 seconds
Work together in pairs: What is the difference between a technical and operational definition?
8. Drawings
450 seconds
Brain break: Draw a dog and a cat hanging out under the moon with a bowl of ice cream between them.
9. Poll
60 seconds
What is the technical definition of a router?
• A software program used to access the internet
• A networking device that forwards data packets between computer networks
• A device used to display information on a computer screen
10. Poll
60 seconds
What is an operational definition?
• A definition that defines the meaning of a word in common language
• A definition that provides technical specifications for a product
• A definition that explains how something works or operates
11. Poll
60 seconds
What is the technical definition of RAM?
• Central Processing Unit, it processes data and executes instructions.
• Random Access Memory, it stores data temporarily so that it can be quickly accessed by CPU.
• Read-Only Memory, it stores permanent instructions for your computer.
12. Poll
60 seconds
What is an example of an operational definition for 'user-friendly' software?
• Software with minimal documentation and support.
• Software that can be easily navigated without requiring extensive training.
• Software with advanced features and complex functionality.
13. Poll
60 seconds
What is an example of a technical specification for a laptop?
• Screen size and resolution
• Color options
• The number of pre-installed applications
Do containers kill VMs?
I have read a lot of topics on this subject but most of them are very diplomatic. Just curious: if containers are stable and matured in all aspects, do we really need traditional VMs? Why not just spin containers up on bare metal and reduce the overhead?
beelandc
Flight Engineer
There are definitely situations where it makes sense to forego VMs altogether and just deploy containers / container-based infrastructure directly onto bare metal machines.
However, I also think there is still a place for VM virtualization in most IT environments. They provide IT departments with flexibility at a level of abstraction below containers. Furthermore, I would argue that not all applications are good candidates for containerization. I would expect some workloads (particularly legacy applications that are in some sort of maintenance mode) to remain outside of containers.
I just don't see the two technologies as mutually exclusive. You see the same arguments coming up now with FaaS vs Containers. There are certainly situations where one makes sense over the other, but I expect both to co-exist for the foreseeable future.
shauny
Mission Specialist
I think traditional VMs will always be around, but there will be a general shift towards container-oriented operating systems (think Atomic or CoreOS).
They're definitely stable and matured, but their use case doesn't suit being an environment for sysadmins or developers to log into and work day-to-day. For example, I have it set up so I can scale up and down my kubernetes environment at the click of a button - it creates a new VM in vCenter, deploys an ISO, then joins it to the cluster. But I can't see things like my jump box ever working as a container.
Similar thing with virtual appliances: something like a virtual firewall or syslog server, I feel, works best when it's separate from the environment it's supporting. Should the K8s cluster die or have a serious issue, it takes down not just the workloads, but your VPN with it.
All of the above can be resolved one way or another (for example, I usually put an additional SSL VPN somewhere just in case my usual IPsec tunnel ever goes down), but I don't feel the risk outweighs the rewards quite yet.
To answer your specific question of whether we will ever just spin them up on bare metal: no, I don't think we ever will. Containers are just that, containers. They're abstractions upon an existing system (cgroups and namespaces and other fancy-pants magic stuff). If we created a system where it's just running on bare metal, we'd have just created another OS. That's why I focused on container-specific OSes earlier on - that and cloud-init are going to be where the real focus is. Minimal traditional OS with container focus.
Just my rambling thoughts. What do you think?
Then you have the resources issue: containers use the resources of the host server, whereas VMs have their own resources.
VMs can provide high availability (HA) from the hardware perspective; containers can provide HA from the application perspective,
so VMs and containers can support each other.
We Learn From Failure, Not Success
kubeadm
Flight Engineer
It won't - just like television didn't kill the radio, etc.
As mentioned by others, VM provides better isolation and is more suitable for certain workloads.
cheers,
Adrian
I'll go out on a limb here and say Containers absolutely DO NOT "kill virtual machines" because right now one of the best ways to work with Containers is on top of a vm running the base O/S. Whether that's something really tiny or Atomic or your plain-jane RHEL 7.x release. In my business, VM's are our go-to for almost everything. We've got lots of bare metal, but those are usually high-demand, high-bandwidth and high-performance mission critical applications. For "softer" loads, development, testing and compute tasks, we'll choose a VM every time.
Bishop
Cadet
Let me take that one step farther.
Look around, and you'll find, due to resource contention, the recommended method of deploying a containerized app is one container per VM -- that's one docker image running on a tiny (photon or alpine) VM. Do that 30 times for 30 images. That's pretty much how I saw VMware managing their container plans; and putting the resource limits on the VM gets us agile little containers with any bloating contained.
But if we can vagrant up a VM built from instructions very like a dockerfile, what's the difference between an app running as an image on an alpine VM vs the app vagranted up inside a RHEL7 slim VM? Patching is more straightforward, as you don't need to worry about versions and dependencies in and out of the docker image, but other than any overhead or lag with the docker shim layer it's about the same thing. Oh, also the filesystem parts. And the port forwarding. And the inter-image dependencies.
So if we should really deploy one image per VM anyway, and if it's really the same thing except a lot simpler management for the slim VM over the docker image on a VM, then really we should just vagrant something up instead of dockering it, and be ready to recycle the slim VM with its payload when we want to refresh (and don't want to use the option of just upgrading the apps).
[Do] containers kill [VMs]? I'm going to say no. Sometimes I worry they barely make traction.
beelandc
Flight Engineer Flight Engineer
Flight Engineer
• 3,324 Views
I agree that if you're going to deploy only one instance of one image per VM, organizations will lose out on many of the potential benefits of containers (including more efficient resource utilization vs just VMs) and the remaining benefits may not (in that specific scenario) make containers worth the additional complexity and overhead.
However, I don't really agree with the notion that the recommended way to use containers is to deploy one container per VM, and while I'm sure there are some organizations out there doing that, I have not personally encountered any using that approach.
If your concern is resource contention, you should implement resource limits on your containers and scale your environment appropriately to optimize the use of the underlying resources without negatively impacting your application performance.
Many enterprise environments leverage container platforms, such as OpenShift and Docker Swarm, to help manage their container-based infrastructure at scale. I'm personally most familiar with OpenShift, but all of these platforms provide features such as container scheduling, replication management, networking support, and resource limits that allow container environments to be managed effectively. However, even standalone Docker has the native ability to implement resource limits for containers.
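As an illustration of those native resource limits, here is a sketch of a Docker Compose fragment; the service name, image, and values are assumptions, not taken from this thread:

```yaml
services:
  web:
    image: nginx:alpine
    deploy:
      resources:
        limits:
          cpus: "0.50"     # cap the container at half a CPU core
          memory: 256M     # hard memory ceiling for the container
        reservations:
          memory: 128M     # amount the scheduler tries to guarantee
```

The same effect is available on a plain `docker run` via its `--cpus` and `--memory` flags.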
Free Support Forum - aspose.com
Range is Nothing
Hello,
I am trying to get the named ranges from the attached Excel file.
The first range is called 'BIP_range_simple'.
Its area covers B1 to C4.
When I use GetRangeByName("BIP_range_simple") I get a proper range.
The second is called 'BIP_range_komplex'.
Its area covers A1 to A11 plus C1 to C11.
When I use GetRangeByName("BIP_range_komplex") the range is Nothing.
I use Aspose.Cells version 4.8.1.3.
Hi,
Thanks for providing us the template file.
Since "BIP_range_komplex" is a defined Name containing non-contiguous ranges, you cannot use a Range object for it directly. Use the Names API to get the defined Name first. To get the ranges for "BIP_range_komplex", call Name.GetRanges(), which returns the array of ranges (Range[]) in the defined Name. I have created a sample code for your reference; kindly refer to it, it works fine as I tested.
Sample code:
Workbook wb = new Workbook();
wb.Open(@"e:\test\range_test.xls");
Name name = wb.Worksheets.Names["BIP_range_komplex"];
MessageBox.Show(name.RefersTo); // fine
Range[] ranges = name.GetRanges();
int frow, fcol;
int rowcount, colcount;
if (ranges != null)
{
    for (int i = 0; i < ranges.Length; i++)
    {
        // First cell of this area.
        frow = ranges[i].FirstRow;
        fcol = ranges[i].FirstColumn;
        string f1 = CellsHelper.CellIndexToName(frow, fcol);
        MessageBox.Show(ranges[i].FirstRow + ":" + ranges[i].FirstColumn);
        // Last cell of this area.
        rowcount = ranges[i].RowCount - 1 + ranges[i].FirstRow;
        colcount = ranges[i].ColumnCount - 1 + ranges[i].FirstColumn;
        string f2 = CellsHelper.CellIndexToName(rowcount, colcount);
        MessageBox.Show(f1 + ":" + f2);
    }
}
Thank you.
JavaRanch » Java Forums » Certification » Programmer Certification (SCJP/OCPJP)
STRANGE??????Final variables
sunilkumar ssuparasmul
Ranch Hand
Joined: Dec 13, 2000
Posts: 142
Look at the following code. Here I am able to initialize the final variable twice; how can this happen? Please clarify.
1)
class VariableTest {
public final int ii;
VariableTest()
{
ii=10;
}
VariableTest(int abc)
{
ii=100;
}
}
2)class VariableTest {
public final int ii;
protected static final int iii;
VariableTest()
{
ii=10;
iii=10;
}
}
When I initialize iii I get the following two errors:
Error 1) Variable 'iii' may not have been initialized. It must be assigned a value in an initializer, or in every constructor.
protected static final int iii;
^
Error 2) Can't assign a second value to a blank final variable: iii
iii=10;
But when I don't initialize iii I get the following error:
Blank final variable 'iii' may not have been initialized. It must be assigned a value in an initializer, or in every constructor.
protected static final int iii;
Why is it so strange? Please, moderators and others, throw some light on this.
------------------
"Winners don't do different things
They do things differently"
Simon Sun
Greenhorn
Joined: Dec 07, 2000
Posts: 20
Your first case is OK. I saw it in the book.
Your second case: it is the "protected" that is causing the problem.
Think of it this way: protected means that classes in the same package and derived classes can access that member. So when some class in the same package accesses the class before it has been initialized, the blank final variable has not been initialized.
Am I saying something wrong? I hope some expert can give us a clarification as well.
simon
Aatif Kamal
Greenhorn
Joined: Jan 09, 2001
Posts: 2
The answer to your first problem is simple...
If you look at your code carefully, you will find that you are not actually initializing the final variable twice. How is that? Well, you are initializing the variable ii in two different constructors:
1) VariableTest() --- 2) VariableTest(int abc)
Now, when you create an instance of the class VariableTest you can use only one constructor at a time, so the final variable will be initialized only once, either by constructor one or constructor two. For more clarification, check out the following code:
public class test01
{
public static void main(String args[])
{
VariableTest vt1 = new VariableTest();
System.out.println(vt1.ii);
VariableTest vt2 = new VariableTest(10);
System.out.println(vt2.ii);
}
}
class VariableTest
{
public final int ii;
VariableTest()
{
ii=10;
}
VariableTest(int abc)
{
ii=100;
}
}
The answer to your second question is also very simple :)
When you declare a variable static final, you must initialize it at the same position where it is declared. To rectify the problem use this code:
public static final int iii=10;
That's why it was giving you the first error. The second error is caused by the fact that you cannot assign a value to a static final variable twice in your code.
------------------
AK
Ajay Patel
Ranch Hand
Joined: Jan 02, 2001
Posts: 39
Hi Aatif & Simon,
I think that you guys are a bit confused about static final variables. A static final cannot be initialized in a constructor because a constructor is called on every object creation, but a static final variable is to be initialized only once.
But this doesn't mean you have to initialize a static final at declaration; you can always use static initializer blocks.
The following code compiles perfectly:
class VariableTest {
public final int ii;
protected static final int iii;
static
{
iii=10;
}
VariableTest()
{
ii=10;
//iii=10;//error - cant initialize here.
}
}
-Aj
sunilkumar ssuparasmul
Ranch Hand
Joined: Dec 13, 2000
Posts: 142
Hi Aj Patel, as you said that static final variables are not initialized in constructors, then why does the compiler throw such an error? See the bold text:
Blank final variable 'iii' may not have been initialized. It must be assigned a value in an initializer, or in every constructor.
Ajay Patel
Ranch Hand
Joined: Jan 02, 2001
Posts: 39
Hi Sunilkumar,
I compiled your code for q.2 and got only one compiler error:
VariableTest.java:7: cannot assign a value to final variable iii
iii=10;
^
1 error
Please recheck.
I don't think that static final variables can be initialized in constructors.
-Aj
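Pulling the answers above together, here is a minimal compilable sketch (the class and field names are mine, for illustration). It shows both rules at once: a blank instance final must be assigned exactly once in every constructor, and a blank static final must be assigned exactly once in a static initializer, never in a constructor:

```java
// Minimal sketch of the blank-final rules discussed in this thread.
public class FinalRules {
    // Blank instance final: legal as long as EVERY constructor
    // assigns it exactly once before returning.
    public final int ii;

    // Blank static final: a constructor runs once per object, so the
    // one-time assignment must happen in a static initializer instead.
    protected static final int III;

    static {
        III = 10; // the single permitted assignment
    }

    FinalRules() {
        ii = 10;  // this constructor assigns ii once
    }

    FinalRules(int abc) {
        ii = 100; // a different constructor may assign a different value
    }

    public static void main(String[] args) {
        System.out.println(new FinalRules().ii);   // prints 10
        System.out.println(new FinalRules(42).ii); // prints 100
        System.out.println(III);                   // prints 10
    }
}
```

Only one constructor runs per object, so each FinalRules instance still gets its ii assigned exactly once, which is why the original two-constructor example compiles.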
Main Content
parquetwrite
Write columnar data to Parquet file
Since R2019a
Description
parquetwrite(filename,T) writes a table or timetable T to a Parquet 2.0 file with the filename specified in filename.
parquetwrite(filename,T,Name,Value) specifies additional options with one or more name-value pair arguments. For example, you can specify "VariableCompression" to change the compression algorithm used, or "Version" to write the data to a Parquet 1.0 file.
Examples
Write tabular data into a Parquet file and compare the size of the same tabular data in .csv and .parquet file formats.
Read the tabular data from the file outages.csv into a table.
T = readtable('outages.csv');
Write the data to Parquet file format. By default, the parquetwrite function uses the Snappy compression scheme. To specify other compression schemes see 'VariableCompression' name-value pair.
parquetwrite('outagesDefault.parquet',T)
Get the file sizes and compute the ratio of the size of tabular data in the .csv format to size of the same data in .parquet format.
Get size of .csv file.
fcsv = dir(which('outages.csv'));
size_csv = fcsv.bytes
size_csv = 101040
Get size of .parquet file.
fparquet = dir('outagesDefault.parquet');
size_parquet = fparquet.bytes
size_parquet = 44881
Compute the ratio.
sizeRatio = ( size_parquet/size_csv )*100 ;
disp(['Size Ratio = ', num2str(sizeRatio) '% of original size'])
Size Ratio = 44.419% of original size
Create nested data and write it to a Parquet file.
Create a table with one nested layer of data.
FirstName = ["Akane"; "Omar"; "Maria"];
LastName = ["Saito"; "Ali"; "Silva"];
Names = table(FirstName,LastName);
NumCourse = [5; 3; 6];
Courses = {["Calculus I"; "U.S. History"; "English Literature"; "Studio Art"; "Organic Chemistry II"];
["U.S. History"; "Art History"; "Philosophy"];
["Calculus II"; "Philosophy II"; "Ballet"; "Music Theory"; "Organic Chemistry I"; "English Literature"]};
data = table(Names,NumCourse,Courses)
data=3×3 table
Names NumCourse Courses
FirstName LastName
_____________________ _________ ____________
"Akane" "Saito" 5 {5x1 string}
"Omar" "Ali" 3 {3x1 string}
"Maria" "Silva" 6 {6x1 string}
Write your nested data to a Parquet file.
parquetwrite("StudentCourseLoads.parq",data)
Read the nested Parquet data.
t2 = parquetread("StudentCourseLoads.parq")
t2=3×3 table
Names NumCourse Courses
FirstName LastName
_____________________ _________ ____________
"Akane" "Saito" 5 {5x1 string}
"Omar" "Ali" 3 {3x1 string}
"Maria" "Silva" 6 {6x1 string}
Input Arguments
Name of output Parquet file, specified as a character vector or string scalar.
Depending on the location you are writing to, filename can take on one of these forms.
Location
Form
Current folder
To write to the current folder, specify the name of the file in filename.
Example: 'myData.parquet'
Other folders
To write to a folder different from the current folder, specify the full or relative path name in filename.
Example: 'C:\myFolder\myData.parquet'
Example: 'dataDir\myData.parquet'
Remote Location
To write to a remote location, filename must contain the full path of the file specified as a uniform resource locator (URL) of the form:
scheme_name://path_to_file/myData.parquet
Based on the remote location, scheme_name can be one of the values in this table.
Remote Location | scheme_name
Amazon S3™ | s3
Windows Azure® Blob Storage | wasb, wasbs
HDFS™ | hdfs
For more information, see Work with Remote Data.
Example: 's3://bucketname/path_to_file/myData.parquet'
Data Types: char | string
Input data, specified as a table or timetable.
Use parquetwrite to export structured Parquet data. For more information on Parquet data types supported for writing, see Apache Parquet Data Type Mappings.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: parquetwrite(filename,T,'VariableCompression','gzip','Version','1.0')
Compression scheme names, specified as one of these values:
• 'snappy', 'brotli', 'gzip', or 'uncompressed'. If you specify one compression algorithm then parquetwrite compresses all variables using the same algorithm.
• Alternatively, you can specify a cell array of character vectors or a string vector containing the names of the compression algorithms to use for each variable.
In general, 'snappy' has better performance for reading and writing, 'gzip' has a higher compression ratio at the cost of more CPU processing time, and 'brotli' typically produces the smallest file size at the cost of compression speed.
Example: parquetwrite('myData.parquet', T, 'VariableCompression', 'brotli')
Example: parquetwrite('myData.parquet', T, 'VariableCompression', {'brotli' 'snappy' 'gzip'})
Encoding scheme names, specified as one of these values:
• 'auto' — parquetwrite uses 'plain' encoding for logical variables, and 'dictionary' encoding for all others.
• 'dictionary', 'plain' — If you specify one encoding scheme then parquetwrite encodes all variables with that scheme.
• Alternatively, you can specify a cell array of character vectors or a string vector containing the names of the encoding scheme to use for each variable.
In general, 'dictionary' encoding results in smaller file sizes, but 'plain' encoding can be faster for variables that do not contain many repeated values. If the size of the dictionary or number of unique values grows to be too big, then the encoding automatically reverts to plain encoding. For more information on Parquet encodings, see Parquet encoding definitions.
Example: parquetwrite('myData.parquet', T, 'VariableEncoding', 'plain')
Example: parquetwrite('myData.parquet', T, 'VariableEncoding', {'plain' 'dictionary' 'plain'})
Number of rows to write per output row group, specified as a nonnegative numeric scalar or vector of nonnegative integers.
• If you specify a scalar, the scalar value sets the height of all row groups in the output Parquet file. The last row group may contain fewer rows if there is not an exact multiple.
• If you specify a vector, each value in the vector sets the height of a corresponding row group in the output Parquet file. The sum of all the values in the vector must match the height of the input table.
A row group is the smallest subset of a Parquet file that can be read into memory at once. Reducing the row group height helps the data fit into memory when reading. Row group height also affects the performance of filtering operations on a Parquet data set because a larger row group height can be used to filter larger amounts of data when reading.
If RowGroupHeights is unspecified and the input table exceeds 67108864 rows, the number of row groups in the output file is equal to floor(TotalNumberOfRows/67108864)+1.
Example: RowGroupHeights=100
Example: RowGroupHeights=[300, 400, 500, 0, 268]
Parquet version to use, specified as either '1.0' or '2.0'. By default, '2.0' offers the most efficient storage, but you can select '1.0' for the broadest compatibility with external applications that support the Parquet format.
Caution
Parquet version 1.0 has a limitation that it cannot round-trip variables of type uint32 (they are read back into MATLAB® as int64).
Limitations
In some cases, parquetwrite creates files that do not represent the original array T exactly. If you use parquetread or datastore to read the files, then the result might not have the same format or contents as the original table. For more information, see Apache Parquet Data Type Mappings.
Version History
Introduced in R2019a
Questions tagged [partial-fractions]
Rewriting a rational function as a sum of partial fractions is often useful when calculating integrals.
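For instance, a quick worked example of the kind of rewrite this tag covers:

```latex
\frac{x}{x^2-1} = \frac{x}{(x-1)(x+1)}
                = \frac{1/2}{x-1} + \frac{1/2}{x+1},
\qquad\text{so}\qquad
\int \frac{x}{x^2-1}\,dx = \tfrac{1}{2}\ln|x-1| + \tfrac{1}{2}\ln|x+1| + C.
```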
0 votes
0 answers
39 views
Why does the degree of the numerator have to be 1 less than that of the denominator?
Pretty sure one suggested duplicate will be: Why do we take the degree of numerator 1 degree less than the denominator? There, the only answer describes why the numerator having the same degree as ...
0 votes
0 answers
45 views
Find a particular solution for $(D^2+1)y=e^{a \cos x}$, where $a$ is an arbitrary constant.
I tried solving the problem as in the image by taking partial integrals and also by series expansion, but it is becoming more complex to continue and find a closed form of it. Please solve it.
0 votes
1 answer
53 views
Which integrating technique should I use?
Just some context: In the mathematical course, I have undertaken this year, I've just learnt how to integrate using partial fractions, substitution(not trig though, just a variable) and integrating ...
0 votes
2 answers
28 views
Taking the partial fraction when the variable is exponential
I'm trying to solve the following integral: $\int\frac{dx}{2^x+3}$ and here's what I've done so far: Substituting $t=2^x$ we have $dt=2^x\ln 2 dx\implies \frac{dt}{t}=\ln2dx$ Now, I have to take the ...
0 votes
2 answers
80 views
Integrate $2u/(u-u^3)$ [closed]
I'm currently trying to integrate: $$ \int \! \frac{2u}{u-u^3} \, du = \ln \frac{u+1}{u-1} + \ln C $$ I've tried to use partial fractions to simplify the $$ \frac{1}{u-u^3} = \frac{1}{u} - \frac{1}{2 \...
1 vote
2 answers
83 views
Integral of a partial fraction function$\int\frac{1}{(x-1)^3(x-2)^2}dx$
How do we determine the integral $$\int\frac{1}{(x-1)^3(x-2)^2}dx$$ My ideas are the followings: We just split them into partial fractions like $$\frac{1}{(x-1)^3(x-2)^2}=\frac{A}{x-1}+\frac{B}{(x-1)^...
3 votes
1 answer
63 views
Evaluate the sum: $\sum_{n=0}^{\infty}\frac{x^{n+2}}{(n+2)\ n!}$
How to evaluate the below sum? $$\sum_{n=0}^{\infty}\frac{x^{n+2}}{(n+2)\ n!}$$ I was trying to find the integral of $x\ e^x$ without using Integral By Parts. Here's what I got so far: $$\begin{...
-1 votes
1 answer
41 views
$\frac{1}{2n}+\frac{1}{n+1}-\frac{3}{2\left(n+2\right)}$ = $\frac{1}{2n}+\frac{3}{2\left(n+1\right)}-\frac{4n+5}{2\left(n+1\right)\left(n+2\right)}$ [closed]
How can one go from $\frac{1}{2n}+\frac{1}{n+1}-\frac{3}{2\left(n+2\right)}$ to $\frac{1}{2n}+\frac{3}{2\left(n+1\right)}-\frac{4n+5}{2\left(n+1\right)\left(n+2\right)}$ I'm really trying for an hour ...
0 votes
2 answers
35 views
How to separate the following fraction into partial fractions?
I have that $$\frac{x^2(1-x)}{(1-x)^3} = \frac{x^2}{(1-x)^3} -\frac{x^3}{(1-x)^3} $$ Assuming we begin by cancelling the $(1-x)$ on top with one of those on the bottom, how do we go about splitting $\...
0 votes
1 answer
49 views
Is there a symbol for comparing coefficients?
While doing PFs, ODEs or many other things, comparing coefficients often come up. Is it wrong to use the '=' for comparing coefficients, e.g. '$4x^2$ + 3x = Ax ∴ A=3'. Or is there a correct symbol to ...
1 vote
2 answers
46 views
prove a partial fraction expansion formula
Let $f(x) = (x-a_1)(x-a_2)\cdots ( x-a_n), n\ge 1,$ where the $a_i$'s are distinct real numbers. For $k=0,1,\cdots, n-1$, prove that the partial fraction expansion of $\frac{x^k}{f(x)}$ is $\dfrac{x^k}...
0 votes
2 answers
94 views
Solution to $\int\frac{\ln(1+x^2)}{x^2}dx$
Q: $\int\frac{\ln(1+x^2)}{x^2}dx$ Here is my entire working: So, overall, I started with the reverse product rule, then onto reverse chain rule and then tried to partial fraction, however, I still ...
2 votes
4 answers
77 views
Finding the partial fractions decomposition of $\frac{9}{(1+2x)(2-x)^2} $
So this is basically my textbook work for my class, where we are practicing algebra with partial fractions. I understand the basics of decomposition, but I do not understand how to do it when then the ...
3 votes
2 answers
60 views
Find minimum of the function using AM-GM
Problem: Find the minimum of the function $f(x,y)=x + \frac{8}{y(x-y)}$, where $x>y>0$ using AM-GM. My attempt: $$f(x,y)=2\cdot \frac{x+\frac{8}{y(x-y)}}{2} \ge 2 \sqrt{\frac{8x}{y(x-y)}}$$ But ...
2 votes
1 answer
51 views
When should I use partial fractions in generating functions
I am currently studying generating functions and I don't understand why should I use partial fraction decomposition when solving $x^n$ coefficient of a question. For example, in this function $$\frac{...
3 votes
5 answers
85 views
How to decompose $\frac{1}{(1 + x)(1 - x)^2}$ into partial fractions
Good Day. I was trying to decompose $$\frac{1}{(1 + x)(1 - x)^2}$$ into partial fractions. $$\frac{1}{(1 + x)(1 - x)^2} = \frac{A}{1 + x} + \frac{B}{(1 - x)^2}$$ $$1 = A(1 - x)^ 2 + B(1 + x)$$ ...
0 votes
1 answer
37 views
Partial fraction with complex roots
Is it so that partial fractions with complex roots can work sometime, and sometime not? I have tried to check a result by WA here, and tried to solve it manually: \begin{equation} X(z)=\frac{104z+30}{...
0 votes
2 answers
43 views
A problematic partial fraction decomposition $ X(z)=\frac{z}{(z-3)(z^2+4z+5)}$ [closed]
I try to solve this partial fraction: \begin{equation} X(z)=\frac{z}{(z-3)(z^2+4z+5)} \end{equation} and use the following form \begin{equation} X(z)=\frac{A}{(z-3)}+\frac{Bz+C}{(z^2+4z+5)} \end{...
0 votes
1 answer
84 views
How to Solve This Integral With Multiple Variables? I think I should use partial fraction decomposition.
The problem is the integral of $$\int {\frac{-8 x}{x^4-a^4}}\, dx$$ I factored out the -8 and divided the x. I tried to use partial fraction decomposition but it wasn't forming into something I could ...
0 votes
0 answers
32 views
Integral Using Partial Fraction Decomposition
So I have the integral of (4x^2+2x-1)/(x^3+x^2), and I have to solve it using partial fraction decomposition. The only thing is, the way I set it up, I need another factor for x to make it equal one ...
-1 votes
1 answer
74 views
Finding partial fractions of $\frac{z^3+2z^2-2z}{(z-2)(z^2+2)}$ and/or using Cauchy's formula to solve
I am trying to find the inverse z-transform of \begin{equation} x(z)=\frac{z^3+2z^2-2z}{(z-2)(z^2+2)} \end{equation} and for this we need to get partial fractions. I have tried multiple approaches, ...
2 votes
2 answers
50 views
Partial fractions and residue theorem
I need to find the inverse Laplace transform of this function: $$F(s) = \frac{50(s+1)}{s(s^2+20s+116)(0.8s+1)} $$ $$ \frac{50(s+1)}{s(s^2+20s+116)(0.8s+1)} = \frac{K_1}{s} + \frac{K_2}{0.8s+1} + \frac{...
1 vote
1 answer
54 views
Integrating $\int \frac{1}{x\sqrt{3-x^2}}dx$ without trig sub
So I am evaluating $\int \frac{1}{x\sqrt{3-x^2}}dx$ without using trig sub integrals. So far I have $$u=\sqrt{3-x^2}, x^2=3-u^2,du=-\frac{x}{\sqrt{3-x^2}}dx, dx = -\frac{\sqrt{3-x^2}}{x}$$ So ...
1 vote
0 answers
24 views
Integration by Partial Fraction Decomposition, Given Arbitrary Constants
Given Newton's 2nd law equation, I'm supposed to find $v$. My second law equation is: $m\dot{v}=-bv-vc^2$ By separation of variables, I arrive at $$\frac{dv}{bv+cv^2}=\frac{-1}{m}dt$$ $$\int_{v_0}^{v}...
0 votes
2 answers
49 views
Question about specific step in proving Schur's Theorem (Combinatorics)
I refer to p.98 of generatingfunctionology in proving Schur's Theorem: The partial fraction expansion of $\mathcal{H}(x)$ is of the form \begin{align*} \mathcal{H}(x) &= \frac{1}{(1-x^{a_1})(1-x^{...
0 votes
1 answer
74 views
How do we calculate the inverse Laplace transform of $F(s)=\frac{s^2+1}{(s+1)(s-1)}$?
We have \begin{equation} F(s)=\frac{s^2+1}{(s+1)(s-1)} \end{equation} which I want to use Heavisde method to find the fractions. We start \begin{equation} F(s)=\frac{s^2+1}{(s+1)(s-1)}=\frac{A}{(s+1)}+...
2 votes
1 answer
65 views
Two partial fraction approaches, one is wrong, the other is right, why?
I want to do a partial fraction on \begin{equation} \frac{z}{(z-4)(z+\frac{1}{2})} \end{equation} Method one, which apparently is wrong: \begin{equation} \frac{z}{(z-4)(z+\frac{1}{2})}=\frac{A}{z-4}+\...
1 vote
1 answer
64 views
Partial fraction decomposition involving imaginary numbers and two variables
I am trying to find a partial fraction decomposition for the following: $$\frac{1}{(-\alpha xi+4y)(\alpha xi + 2y)}$$ where $\alpha\in \mathbb{R}$. I am understanding that I could write this ...
0 votes
4 answers
138 views
How to expand $\frac{1}{n^a(n+k)^b}$ using partial fraction decomposition?
Is it possible to decompose $\displaystyle\frac{1}{n^a(n+k)^b}$ into finite summation? where $a,b,n,k\in Z^{+}$ and $a+b$ is odd. What I tried is converting the fraction to double integral: $\...
7 votes
3 answers
280 views
If $\int_{0}^{\infty}\frac{dx}{1+x^2+x^4}=\frac{\pi \sqrt{n}}{2n}$, then $n=$
If $\int_{0}^{\infty}\frac{dx}{1+x^2+x^4}=\frac{\pi \sqrt{n}}{2n}$, then $n=$ $\text{A) }1 \space \space \space \space \space\text{B) }2 \space \space \space \space \space\text{C) }3 \space \space \...
2 votes
1 answer
88 views
Path to proving partial fractions and the fundamental theorem of algebra
As I've learned Calculus, I've tried to follow along with proofs of the rules that I use. In most cases, like say the Power Rule, I'm able to follow along with the proofs using concepts I understand, ...
0 votes
1 answer
26 views
Solving this system for Partial Fraction Decomposition
I have a rational fraction $\frac{P(x)}{Q(x)}$ and would transform it into a sum of separate fractions. I know that $\{a_n\}$ is the set of the roots of $Q(x)$ which is of grade $t$, so it has exactly ...
1 vote
2 answers
83 views
Evaluation of integral from textbook
Integral in question $$ \int\frac{dx}{\sqrt{\cos(x)}\sin(x)}$$ (If it helps, the original question in my textbook is to find the definite integral corresponding to this antiderivative with the limits ...
0 votes
1 answer
81 views
Partial Fractions with Two Repeated Linear Terms
Rewrite the expression below into partial fractions $$\frac{\omega s}{(s+\omega)^2(s-\omega)^2}$$ I started by taking the general form $$\frac{A}{(s+\omega)} + \frac{B}{(s+\omega)^2} + \frac{C}{(s-\...
0 votes
3 answers
139 views
Partial Fractions of $\frac{1}{x^6+1}$
I am trying to solve the following integral : $$\int \frac{1}{1+x^6}dx$$ I do not want the reader to evaluate the integral, but rather the partial fractions of the integrand : $\frac{1}{1+x^6}$. This ...
0 votes
0 answers
46 views
Partial fraction of a function with two variables
I am trying to decompose a function within an optimization problem. I see that Maple and similar software products can do it for a single variable function but not for multi-variate ones. Now, I am ...
0 votes
1 answer
38 views
How do I express this type of equation as partial fraction?
I got this equation by doing a Laplace transformation. Now, I want to find the inverse Laplace and for that first I need to decompose this equation, but I'm a bit confused about how to express this ...
0 votes
3 answers
80 views
A question on partial fractions
I have the given function which I must convert to partial fractions: \begin{equation} \frac{x^2}{(x^2+1)(x^2+9)} \end{equation} and I thought that I should prepare this as: \begin{equation} \frac{A}{(...
0 votes
1 answer
27 views
Partial fraction decomposition trouble with a problem
I have this integral: $$ \int \frac{1}{(1+x^2)(1+(z-x)^2)} {\rm d}x $$ and I want to perform partial fraction decomposition in this form $$ \int \left( \frac{Ax + B}{1+x^2} + \frac{Cx + D }{1+(z-x)^2}...
2 votes
0 answers
73 views
Are there applications of partial fraction decomposition ( of a rational function) outside integration problems?
I've been recently acquainted with a well known technique called " partial fraction decomposition" which allows, for example to express $\frac {x} {x^2-1}$ as $\frac {1}{2(x+1)} + \frac {1} ...
0 votes
1 answer
29 views
How can I solve this primitive function?
The primitive function I'm trying to solve: $\int \frac{1}{x^4-1}\;dx$. I've used the partial fraction decomposition method. The following is the equation I'm setting up to be able to solve A, B and C: $(1/...
2 votes
1 answer
128 views
To determine the integration of $ \int_{0}^{+\infty} \exp\!\Big(-\Big(\frac{ax^2+bx+c}{gx+h}\Big)\Big) dx$.
What is the integration of the following function: $$ \int_{0}^{+\infty} \exp\!\bigg(-\bigg(\frac{ax^2+bx+c}{gx+h}\bigg) \bigg)dx.$$ What I have done is as follows: Here, $\kappa=c-\Big(\...
0 votes
2 answers
69 views
Why are these two integrals different even though they should be equal?
$\int\frac{x^2}{x^2-4}dx$ and $\int\frac{x^2-4}{x^2-4}dx+\int\frac{4}{x^2-4}dx$ The first one is $\ln |x-2|-\ln|x+2|$ and the second one is $x+\frac{1}{4}\ln |x-2|-\frac{1}{4}\ln|x+2|$. Shouldn't they ...
0 votes
1 answer
78 views
Partial Fraction Decomposition (Complex Numbers)
I'm going insane with this question from a previous exam: How do I get the partial fraction decomposition of: $${15 \over (z-3i)(2z-3)}$$ I don't understand how to 'equate' anything here. If we have ...
1 vote
0 answers
21 views
How was this partial fraction solved with 2 variables?
How was this partial fraction decomposition done? partial fraction image
1 vote
1 answer
66 views
Partial fraction decomposition done with square root. How is it possible?
I just stumbled on this example: $$\lim _{n\to \infty }\frac{\frac{\sqrt{n^3+n}}{n^4-n^2}}{\frac{n^{\frac{3}{2}}}{n^4}}=\lim _{n\to \infty }\frac{\sqrt{1+\frac{1}{n^2}}}{1-\frac{1}{n^2}}$$ And can't ...
0 votes
2 answers
26 views
How to know if partial fractions have been done incorrectly?
Say you start with a set of fractions already broken up: $$ 2 + \frac{3}{x-1} + \frac{1}{x-3} $$ These can be combined into a single fraction by cross multiplying them: $$ \frac{2(x-1)(x-3) + 3(x-3) + ...
2 votes
2 answers
87 views
Strange/Unexpected behavior of an Infinite product
Some friends and I were playing around with this continued fraction: We noticed when writing it out for each next step, the end behavior went either to 1 (when there was an even number of terms) or ...
-1 votes
2 answers
55 views
Calculate partial fractions
calculate partial fractions for: $1/x^2(x^2 + 1)$ I have tried solving by expanding it like this: $A/x^2 + B/ (x^2 + 1)$ and it results in the right answer as given in class. But partial fractions ...
0 votes
1 answer
104 views
Skepticism concerning Heaviside's "Cover-up Method" for $\textbf{partial fraction decomposition}$
I was reading this paper from MIT and it introduces Heaviside’s Cover-up Method for partial fraction decomposition. In that paper in Example $1$ it solves a problem using that method and just when ...
t4018-diff-funcname: demonstrate end of line funcname matching flaw
[git/dscho.git] / git.c
blob 5582c515ac04609a338de1d2d5e510e7e7c4914d
1 #include "builtin.h"
2 #include "exec_cmd.h"
3 #include "cache.h"
4 #include "quote.h"
6 const char git_usage_string[] =
7 "git [--version] [--exec-path[=GIT_EXEC_PATH]] [-p|--paginate|--no-pager] [--bare] [--git-dir=GIT_DIR] [--work-tree=GIT_WORK_TREE] [--help] COMMAND [ARGS]";
9 const char git_more_info_string[] =
10 "See 'git help COMMAND' for more information on a specific command.";
12 static int use_pager = -1;
13 struct pager_config {
14 const char *cmd;
15 int val;
18 static int pager_command_config(const char *var, const char *value, void *data)
20 struct pager_config *c = data;
21 if (!prefixcmp(var, "pager.") && !strcmp(var + 6, c->cmd))
22 c->val = git_config_bool(var, value);
23 return 0;
26 /* returns 0 for "no pager", 1 for "use pager", and -1 for "not specified" */
27 int check_pager_config(const char *cmd)
29 struct pager_config c;
30 c.cmd = cmd;
31 c.val = -1;
32 git_config(pager_command_config, &c);
33 return c.val;
36 static void commit_pager_choice(void) {
37 switch (use_pager) {
38 case 0:
39 setenv("GIT_PAGER", "cat", 1);
40 break;
41 case 1:
42 setup_pager();
43 break;
44 default:
45 break;
49 static int handle_options(const char*** argv, int* argc, int* envchanged)
51 int handled = 0;
53 while (*argc > 0) {
54 const char *cmd = (*argv)[0];
55 if (cmd[0] != '-')
56 break;
59 * For legacy reasons, the "version" and "help"
60 * commands can be written with "--" prepended
61 * to make them look like flags.
63 if (!strcmp(cmd, "--help") || !strcmp(cmd, "--version"))
64 break;
67 * Check remaining flags.
69 if (!prefixcmp(cmd, "--exec-path")) {
70 cmd += 11;
71 if (*cmd == '=')
72 git_set_argv_exec_path(cmd + 1);
73 else {
74 puts(git_exec_path());
75 exit(0);
77 } else if (!strcmp(cmd, "-p") || !strcmp(cmd, "--paginate")) {
78 use_pager = 1;
79 } else if (!strcmp(cmd, "--no-pager")) {
80 use_pager = 0;
81 if (envchanged)
82 *envchanged = 1;
83 } else if (!strcmp(cmd, "--git-dir")) {
84 if (*argc < 2) {
85 fprintf(stderr, "No directory given for --git-dir.\n" );
86 usage(git_usage_string);
88 setenv(GIT_DIR_ENVIRONMENT, (*argv)[1], 1);
89 if (envchanged)
90 *envchanged = 1;
91 (*argv)++;
92 (*argc)--;
93 handled++;
94 } else if (!prefixcmp(cmd, "--git-dir=")) {
95 setenv(GIT_DIR_ENVIRONMENT, cmd + 10, 1);
96 if (envchanged)
97 *envchanged = 1;
98 } else if (!strcmp(cmd, "--work-tree")) {
99 if (*argc < 2) {
100 fprintf(stderr, "No directory given for --work-tree.\n" );
101 usage(git_usage_string);
103 setenv(GIT_WORK_TREE_ENVIRONMENT, (*argv)[1], 1);
104 if (envchanged)
105 *envchanged = 1;
106 (*argv)++;
107 (*argc)--;
108 } else if (!prefixcmp(cmd, "--work-tree=")) {
109 setenv(GIT_WORK_TREE_ENVIRONMENT, cmd + 12, 1);
110 if (envchanged)
111 *envchanged = 1;
112 } else if (!strcmp(cmd, "--bare")) {
113 static char git_dir[PATH_MAX+1];
114 is_bare_repository_cfg = 1;
115 setenv(GIT_DIR_ENVIRONMENT, getcwd(git_dir, sizeof(git_dir)), 0);
116 if (envchanged)
117 *envchanged = 1;
118 } else {
119 fprintf(stderr, "Unknown option: %s\n", cmd);
120 usage(git_usage_string);
123 (*argv)++;
124 (*argc)--;
125 handled++;
127 return handled;
static int handle_alias(int *argcp, const char ***argv)
{
    int envchanged = 0, ret = 0, saved_errno = errno;
    const char *subdir;
    int count, option_count;
    const char **new_argv;
    const char *alias_command;
    char *alias_string;
    int unused_nongit;

    subdir = setup_git_directory_gently(&unused_nongit);

    alias_command = (*argv)[0];
    alias_string = alias_lookup(alias_command);
    if (alias_string) {
        if (alias_string[0] == '!') {
            if (*argcp > 1) {
                struct strbuf buf;

                strbuf_init(&buf, PATH_MAX);
                strbuf_addstr(&buf, alias_string);
                sq_quote_argv(&buf, (*argv) + 1, PATH_MAX);
                free(alias_string);
                alias_string = buf.buf;
            }
            trace_printf("trace: alias to shell cmd: %s => %s\n",
                         alias_command, alias_string + 1);
            ret = system(alias_string + 1);
            if (ret >= 0 && WIFEXITED(ret) &&
                WEXITSTATUS(ret) != 127)
                exit(WEXITSTATUS(ret));
            die("Failed to run '%s' when expanding alias '%s'\n",
                alias_string + 1, alias_command);
        }
        count = split_cmdline(alias_string, &new_argv);
        if (count < 0)
            die("Bad alias.%s string", alias_command);
        option_count = handle_options(&new_argv, &count, &envchanged);
        if (envchanged)
            die("alias '%s' changes environment variables\n"
                "You can use '!git' in the alias to do this.",
                alias_command);
        memmove(new_argv - option_count, new_argv,
                count * sizeof(char *));
        new_argv -= option_count;

        if (count < 1)
            die("empty alias for %s", alias_command);

        if (!strcmp(alias_command, new_argv[0]))
            die("recursive alias: %s", alias_command);

        trace_argv_printf(new_argv,
                          "trace: alias expansion: %s =>",
                          alias_command);

        new_argv = xrealloc(new_argv, sizeof(char *) *
                            (count + *argcp + 1));
        /* insert after command name */
        memcpy(new_argv + count, *argv + 1, sizeof(char *) * *argcp);
        new_argv[count + *argcp] = NULL;

        *argv = new_argv;
        *argcp += count - 1;

        ret = 1;
    }

    if (subdir)
        chdir(subdir);

    errno = saved_errno;

    return ret;
}
const char git_version_string[] = GIT_VERSION;

#define RUN_SETUP	(1<<0)
#define USE_PAGER	(1<<1)
/*
 * require working tree to be present -- anything that uses this needs
 * RUN_SETUP for reading from the configuration file.
 */
#define NEED_WORK_TREE	(1<<2)

struct cmd_struct {
    const char *cmd;
    int (*fn)(int, const char **, const char *);
    int option;
};

static int run_command(struct cmd_struct *p, int argc, const char **argv)
{
    int status;
    struct stat st;
    const char *prefix;

    prefix = NULL;
    if (p->option & RUN_SETUP)
        prefix = setup_git_directory();

    if (use_pager == -1 && p->option & RUN_SETUP)
        use_pager = check_pager_config(p->cmd);
    if (use_pager == -1 && p->option & USE_PAGER)
        use_pager = 1;
    commit_pager_choice();

    if (p->option & NEED_WORK_TREE)
        setup_work_tree();

    trace_argv_printf(argv, "trace: built-in: git");

    status = p->fn(argc, argv, prefix);
    if (status)
        return status & 0xff;

    /* Somebody closed stdout? */
    if (fstat(fileno(stdout), &st))
        return 0;
    /* Ignore write errors for pipes and sockets.. */
    if (S_ISFIFO(st.st_mode) || S_ISSOCK(st.st_mode))
        return 0;

    /* Check for ENOSPC and EIO errors.. */
    if (fflush(stdout))
        die("write failure on standard output: %s", strerror(errno));
    if (ferror(stdout))
        die("unknown write failure on standard output");
    if (fclose(stdout))
        die("close failed on standard output: %s", strerror(errno));
    return 0;
}
static void handle_internal_command(int argc, const char **argv)
{
    const char *cmd = argv[0];
    static struct cmd_struct commands[] = {
        { "add", cmd_add, RUN_SETUP | NEED_WORK_TREE },
        { "annotate", cmd_annotate, RUN_SETUP },
        { "apply", cmd_apply },
        { "archive", cmd_archive },
        { "blame", cmd_blame, RUN_SETUP },
        { "branch", cmd_branch, RUN_SETUP },
        { "bundle", cmd_bundle },
        { "cat-file", cmd_cat_file, RUN_SETUP },
        { "checkout", cmd_checkout, RUN_SETUP | NEED_WORK_TREE },
        { "checkout-index", cmd_checkout_index,
            RUN_SETUP | NEED_WORK_TREE},
        { "check-ref-format", cmd_check_ref_format },
        { "check-attr", cmd_check_attr, RUN_SETUP },
        { "cherry", cmd_cherry, RUN_SETUP },
        { "cherry-pick", cmd_cherry_pick, RUN_SETUP | NEED_WORK_TREE },
        { "clone", cmd_clone },
        { "clean", cmd_clean, RUN_SETUP | NEED_WORK_TREE },
        { "commit", cmd_commit, RUN_SETUP | NEED_WORK_TREE },
        { "commit-tree", cmd_commit_tree, RUN_SETUP },
        { "config", cmd_config },
        { "count-objects", cmd_count_objects, RUN_SETUP },
        { "describe", cmd_describe, RUN_SETUP },
        { "diff", cmd_diff },
        { "diff-files", cmd_diff_files, RUN_SETUP | NEED_WORK_TREE },
        { "diff-index", cmd_diff_index, RUN_SETUP },
        { "diff-tree", cmd_diff_tree, RUN_SETUP },
        { "fast-export", cmd_fast_export, RUN_SETUP },
        { "fetch", cmd_fetch, RUN_SETUP },
        { "fetch-pack", cmd_fetch_pack, RUN_SETUP },
        { "fetch--tool", cmd_fetch__tool, RUN_SETUP },
        { "fmt-merge-msg", cmd_fmt_merge_msg, RUN_SETUP },
        { "for-each-ref", cmd_for_each_ref, RUN_SETUP },
        { "format-patch", cmd_format_patch, RUN_SETUP },
        { "fsck", cmd_fsck, RUN_SETUP },
        { "fsck-objects", cmd_fsck, RUN_SETUP },
        { "gc", cmd_gc, RUN_SETUP },
        { "get-tar-commit-id", cmd_get_tar_commit_id },
        { "grep", cmd_grep, RUN_SETUP | USE_PAGER },
        { "help", cmd_help },
#ifndef NO_CURL
        { "http-fetch", cmd_http_fetch, RUN_SETUP },
#endif
        { "init", cmd_init_db },
        { "init-db", cmd_init_db },
        { "log", cmd_log, RUN_SETUP | USE_PAGER },
        { "ls-files", cmd_ls_files, RUN_SETUP },
        { "ls-tree", cmd_ls_tree, RUN_SETUP },
        { "ls-remote", cmd_ls_remote },
        { "mailinfo", cmd_mailinfo },
        { "mailsplit", cmd_mailsplit },
        { "merge", cmd_merge, RUN_SETUP | NEED_WORK_TREE },
        { "merge-base", cmd_merge_base, RUN_SETUP },
        { "merge-file", cmd_merge_file },
        { "merge-ours", cmd_merge_ours, RUN_SETUP },
        { "merge-recursive", cmd_merge_recursive, RUN_SETUP | NEED_WORK_TREE },
        { "merge-subtree", cmd_merge_recursive, RUN_SETUP | NEED_WORK_TREE },
        { "mv", cmd_mv, RUN_SETUP | NEED_WORK_TREE },
        { "name-rev", cmd_name_rev, RUN_SETUP },
        { "pack-objects", cmd_pack_objects, RUN_SETUP },
        { "peek-remote", cmd_ls_remote },
        { "pickaxe", cmd_blame, RUN_SETUP },
        { "prune", cmd_prune, RUN_SETUP },
        { "prune-packed", cmd_prune_packed, RUN_SETUP },
        { "push", cmd_push, RUN_SETUP },
        { "read-tree", cmd_read_tree, RUN_SETUP },
        { "reflog", cmd_reflog, RUN_SETUP },
        { "remote", cmd_remote, RUN_SETUP },
        { "repo-config", cmd_config },
        { "rerere", cmd_rerere, RUN_SETUP },
        { "reset", cmd_reset, RUN_SETUP },
        { "rev-list", cmd_rev_list, RUN_SETUP },
        { "rev-parse", cmd_rev_parse },
        { "revert", cmd_revert, RUN_SETUP | NEED_WORK_TREE },
        { "rm", cmd_rm, RUN_SETUP },
        { "send-pack", cmd_send_pack, RUN_SETUP },
        { "shortlog", cmd_shortlog, USE_PAGER },
        { "show-branch", cmd_show_branch, RUN_SETUP },
        { "show", cmd_show, RUN_SETUP | USE_PAGER },
        { "status", cmd_status, RUN_SETUP | NEED_WORK_TREE },
        { "stripspace", cmd_stripspace },
        { "symbolic-ref", cmd_symbolic_ref, RUN_SETUP },
        { "tag", cmd_tag, RUN_SETUP },
        { "tar-tree", cmd_tar_tree },
        { "unpack-objects", cmd_unpack_objects, RUN_SETUP },
        { "update-index", cmd_update_index, RUN_SETUP },
        { "update-ref", cmd_update_ref, RUN_SETUP },
        { "upload-archive", cmd_upload_archive },
        { "verify-tag", cmd_verify_tag, RUN_SETUP },
        { "version", cmd_version },
        { "whatchanged", cmd_whatchanged, RUN_SETUP | USE_PAGER },
        { "write-tree", cmd_write_tree, RUN_SETUP },
        { "verify-pack", cmd_verify_pack },
        { "show-ref", cmd_show_ref, RUN_SETUP },
        { "pack-refs", cmd_pack_refs, RUN_SETUP },
    };
    int i;
    static const char ext[] = STRIP_EXTENSION;

    if (sizeof(ext) > 1) {
        i = strlen(argv[0]) - strlen(ext);
        if (i > 0 && !strcmp(argv[0] + i, ext)) {
            char *argv0 = strdup(argv[0]);
            argv[0] = cmd = argv0;
            argv0[i] = '\0';
        }
    }

    /* Turn "git cmd --help" into "git help cmd" */
    if (argc > 1 && !strcmp(argv[1], "--help")) {
        argv[1] = argv[0];
        argv[0] = cmd = "help";
    }

    for (i = 0; i < ARRAY_SIZE(commands); i++) {
        struct cmd_struct *p = commands + i;
        if (strcmp(p->cmd, cmd))
            continue;
        exit(run_command(p, argc, argv));
    }
}
static void execv_dashed_external(const char **argv)
{
    struct strbuf cmd;
    const char *tmp;

    strbuf_init(&cmd, 0);
    strbuf_addf(&cmd, "git-%s", argv[0]);

    /*
     * argv[0] must be the git command, but the argv array
     * belongs to the caller, and may be reused in
     * subsequent loop iterations. Save argv[0] and
     * restore it on error.
     */
    tmp = argv[0];
    argv[0] = cmd.buf;

    trace_argv_printf(argv, "trace: exec:");

    /* execvp() can only ever return if it fails */
    execvp(cmd.buf, (char **)argv);

    trace_printf("trace: exec failed: %s\n", strerror(errno));

    argv[0] = tmp;

    strbuf_release(&cmd);
}
int main(int argc, const char **argv)
{
    const char *cmd = argv[0] && *argv[0] ? argv[0] : "git-help";
    char *slash = (char *)cmd + strlen(cmd);
    int done_alias = 0;

    /*
     * Take the basename of argv[0] as the command
     * name, and the dirname as the default exec_path
     * if we don't have anything better.
     */
    do
        --slash;
    while (cmd <= slash && !is_dir_sep(*slash));
    if (cmd <= slash) {
        *slash++ = 0;
        git_set_argv0_path(cmd);
        cmd = slash;
    }

    /*
     * "git-xxxx" is the same as "git xxxx", but we obviously:
     *
     * - cannot take flags in between the "git" and the "xxxx".
     * - cannot execute it externally (since it would just do
     *   the same thing over again)
     *
     * So we just directly call the internal command handler, and
     * die if that one cannot handle it.
     */
    if (!prefixcmp(cmd, "git-")) {
        cmd += 4;
        argv[0] = cmd;
        handle_internal_command(argc, argv);
        die("cannot handle %s internally", cmd);
    }

    /* Look for flags.. */
    argv++;
    argc--;
    handle_options(&argv, &argc, NULL);
    commit_pager_choice();
    if (argc > 0) {
        if (!prefixcmp(argv[0], "--"))
            argv[0] += 2;
    } else {
        /* The user didn't specify a command; give them help */
        printf("usage: %s\n\n", git_usage_string);
        list_common_cmds_help();
        printf("\n%s\n", git_more_info_string);
        exit(1);
    }
    cmd = argv[0];

    /*
     * We use PATH to find git commands, but we prepend some higher
     * precedence paths: the "--exec-path" option, the GIT_EXEC_PATH
     * environment, and the $(gitexecdir) from the Makefile at build
     * time.
     */
    setup_path();

    while (1) {
        /* See if it's an internal command */
        handle_internal_command(argc, argv);

        /* .. then try the external ones */
        execv_dashed_external(argv);

        /* It could be an alias -- this works around the insanity
         * of overriding "git log" with "git show" by having
         * alias.log = show
         */
        if (done_alias || !handle_alias(&argc, &argv))
            break;
        done_alias = 1;
    }

    if (errno == ENOENT) {
        if (done_alias) {
            fprintf(stderr, "Expansion of alias '%s' failed; "
                    "'%s' is not a git-command\n",
                    cmd, argv[0]);
            exit(1);
        }
        help_unknown_cmd(cmd);
    }

    fprintf(stderr, "Failed to run command '%s': %s\n",
            cmd, strerror(errno));

    return 1;
}
Sunday, January 22, 2017
MoleMash for App Inventor 2
In the game MoleMash, a mole pops up at random positions on a playing field, and the player scores points by hitting the mole before it jumps away. This tutorial shows how to build MoleMash as an example of a simple game that uses animation.
Getting Started
Connect to the App Inventor web site and start a new project. Name it "MoleMash", and also set the screen's Title to "MoleMash". Open the Blocks Editor and connect to the phone.
Also download this picture of a mole and save it on your computer.
Introduction
You'll design the game so that the mole moves once every half-second. If it is touched, the score increases by one, and the phone vibrates. Pressing restart resets the score to zero.
This tutorial introduces:
⦁ image sprites
⦁ timers and the Clock component
⦁ procedures
⦁ picking random numbers between 0 and 1
⦁ text blocks
⦁ typeblocking
The first components
Several components should be familiar from previous tutorials:
⦁ Canvas named "MyCanvas". This is the area where the mole moves.
⦁ Label named "ScoreLabel" that shows the score, i.e., the number of times the player has hit the mole.
⦁ Button named "ResetButton".
Drag these components from the Palette onto the Viewer and assign their names. Put MyCanvas on top and set its dimensions to 300 pixels wide by 300 pixels high. Set the Text of ScoreLabel to "Score: ---". Set the Text of ResetButton to "Reset". Also add a Sound component and name it "Noise". You'll use Noise to make the phone vibrate when the mole is hit, similar to the way you made the kitty purr in HelloPurr.
Timers and the Clock component
You need to arrange for the mole to jump periodically, and you'll do this with the aid of a Clock component. The Clock component provides various operations dealing with time, like telling you what the date is. Here, you'll use the component as a timer that fires at regular intervals. The firing interval is determined by the Clock's TimerInterval property. Drag out a Clock component; it will go into the non-visible components area. Name it "MoleTimer". Set its TimerInterval to 500 milliseconds to make the mole move every half second. Make sure that TimerEnabled is checked.
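Outside of App Inventor, the same pattern can be sketched in ordinary code. The toy class below (Python used purely for illustration; the names are not part of App Inventor) models a Clock that fires its callback once per elapsed interval, using simulated time so the behavior is easy to check:

```python
class MoleTimer:
    """Toy model of the Clock component: fires on_timer once per elapsed interval."""
    def __init__(self, interval_ms, on_timer):
        self.interval_ms = interval_ms   # 500 ms in the tutorial
        self.on_timer = on_timer
        self.enabled = True              # corresponds to TimerEnabled

    def tick(self, elapsed_ms):
        """Advance simulated time; fire once per full interval that elapsed."""
        if not self.enabled:
            return 0
        fires = elapsed_ms // self.interval_ms
        for _ in range(fires):
            self.on_timer()
        return fires

moves = []
timer = MoleTimer(500, lambda: moves.append("move"))
timer.tick(1600)            # 1.6 seconds of simulated time
assert len(moves) == 3      # the mole moved at 500, 1000, and 1500 ms
```

Setting TimerInterval smaller makes the callback fire more often, which is exactly the knob the Variations section suggests turning to change difficulty.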
Adding an Image Sprite
To add the moving mole we'll use a sprite.
Sprites are images that can move on the screen within a Canvas. Each sprite has a Speed and a Heading, and also an Interval that determines how often the sprite moves at its designated speed. Sprites can also detect when they are touched. In MoleMash, the mole has a speed zero, so it won't move by itself. Instead, you'll be setting the mole's position each time the timer fires. Drag an ImageSprite component onto the Viewer. You'll find this component in the Drawing and Animation category of the Palette. Place it within MyCanvas area. Set these properties for the Mole sprite:
⦁ Picture: Use mole.png, which you downloaded to your computer at the beginning of this tutorial.
⦁ Enabled: checked
⦁ Interval: 500 (The interval doesn't matter here, because the mole's speed is zero: it's not moving by itself.)
⦁ Heading: 0 The heading doesn't matter here either, because the speed is 0.
⦁ Speed: 0.0
⦁ Visible: checked
⦁ Width: Automatic
⦁ Height: Automatic
You should see the x and y properties already filled in. They were determined by where you placed the mole when you dragged it onto MyCanvas. Go ahead and drag the mole some more. You should see x and y change. You should also see the mole on your connected phone, and the mole moving around on the phone as you drag it around in the Designer. You've now specified all the components. The Designer should look like this. Notice how Mole is indented under MyCanvas in the component structure list, indicating that the sprite is a sub-component of the canvas.
Component Behavior and Event Handlers
Now you'll specify the component behavior. This introduces some new App Inventor ideas. The first is the idea of a procedure. For an overview and explanation of procedures, check out the Procedures page.
A procedure is a sequence of statements that you can refer to all at once as single command. If you have a sequence that you need to use more than once in a program, you can define that as a procedure, and then you don't have to repeat the sequence each time you use it. Procedures in App Inventor can take arguments and return values. This tutorial covers only the simplest case: procedures that take no arguments and return no values.
Define Procedures
Define two procedures:
⦁ MoveMole moves the Mole sprite to a new random position on the canvas.
⦁ UpdateScore shows the score, by changing the text of the ScoreLabel
Start with MoveMole:
⦁ In the Blocks Editor, under Built-In, open the Procedures drawer. Drag out a to procedure block and change the label "procedure" to "MoveMole".
⦁ Note: There are two similar blocks: procedure then do and procedure then result. Here you should use procedure then do.
⦁ The to MoveMole block has a slot labeled "do". That's where you put the statements for the procedure. In this case there will be two statements: one to set the mole's x position and one to set its y position. In each case, you'll set the position to be a random fraction, between 0 and 1, of the difference between the size of the canvas and the size of the mole. You create that value using blocks for random fraction and multiplication and subtraction. You can find these in the Math drawer.
⦁ Build the MoveMole procedure. The completed definition should look like this:
MoveMole does not take any arguments so you don't have to use the mutator function of the procedure block. Observe how the blocks connect together: the first statement uses the Mole.X set block to set mole's horizontal position. The value plugged into the block's socket is the result of multiplying:
1. The result of calling the random fraction block, which is a value between 0 and 1
2. The result of subtracting the mole's width from the canvas width
The vertical position is handled similarly.
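In conventional code, MoveMole's arithmetic amounts to the following sketch (Python for illustration; the canvas size comes from the Designer, and the sprite size here is an assumed placeholder — any value works):

```python
import random

CANVAS_W, CANVAS_H = 300, 300   # MyCanvas dimensions set in the Designer
MOLE_W, MOLE_H = 48, 48         # assumed sprite size, purely illustrative

def move_mole():
    """A random fraction of (canvas size - sprite size), per axis."""
    x = random.random() * (CANVAS_W - MOLE_W)
    y = random.random() * (CANVAS_H - MOLE_H)
    return x, y

x, y = move_mole()
assert 0 <= x <= CANVAS_W - MOLE_W and 0 <= y <= CANVAS_H - MOLE_H
```

Subtracting the sprite size before scaling is what guarantees the whole mole stays on the canvas, rather than just its top-left corner.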
With MoveMole done, the next step is to define a variable called score to hold the score (number of hits) and give it initial value 0. Also define a procedure UpdateScore that shows the score in ScoreLabel. The actual contents to be shown in ScoreLabel will be the text "Score: " joined to the value of score.
⦁ To create the "Score: " part of the label, drag out a text block from the Text drawer. Change the block to read "Score: " rather than " ".
⦁ Use a join block to attach this to a block that gives the value of the score variable. You can find the join block in the Text drawer.
Here's how score and UpdateScore should look:
Add a Timer
The next step is to make the mole keep moving. Here's where you'll use MoleTimer. Clock components have an event handler called when ... Timer that triggers repeatedly at a rate determined by the TimerInterval.
Set up MoleTimer to call MoveMole each time the timer fires, by building the event handler like this:
Notice how the mole starts jumping around on the phone as soon as you define the event handler. This is an example of how things in App Inventor start happening instantaneously, as soon as you define them.
Add a Mole Touch Handler
The program should increment the score each time the mole is touched. Sprites, like canvases, respond to touch events. So create a touch event handler for Mole that:
1. Increments the score.
2. Calls UpdateScore to show the new score.
3. Makes the phone vibrate for 1/10 second (100 milliseconds).
4. Calls MoveMole so that the mole moves right away, rather than waiting for the timer.
Here's what this looks like in blocks. Go ahead and assemble the when Mole.Touched blocks as shown.
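The four steps of the touch handler can be summarized in code as well (an illustrative Python sketch; the stand-in procedure names mirror the blocks but are not App Inventor APIs):

```python
score = 0
vibrations = []                # records each vibration request, in ms

def update_score():            # stands in for the UpdateScore procedure
    return "Score: " + str(score)

def move_mole():               # stands in for the MoveMole procedure
    pass                       # (random repositioning, as defined earlier)

def on_mole_touched():
    global score
    score += 1                 # 1. increment the score
    label = update_score()     # 2. show the new score
    vibrations.append(100)     # 3. vibrate for 100 ms
    move_mole()                # 4. reposition immediately, don't wait for the timer
    return label

assert on_mole_touched() == "Score: 1"
assert vibrations == [100]
```

The Reset button is the same idea in reverse: set score back to 0, then call the update procedure.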
Here's a tip: You can use typeblocking: typing to quickly create blocks.
⦁ To create a value block containing 100, just type 100 and press return.
⦁ To create a MoveMole block, just type MoveMole and select the block you want from the list
Reset the Score
One final detail is resetting the score. That's simply a matter of making the ResetButton change the score to 0 and calling UpdateScore.
Complete Program
Here's the complete MoleMash program:
Variations
Once you get the game working, you might want to explore some variations. For example:
⦁ Make the game vary the speed of the mole in response to how well the player is doing. To vary how quickly the mole moves, you'll need to change the MoleTimer's Interval property.
⦁ Keep track of when the player hits the mole and when the player misses it, and show a score with both hits and misses. To do this, you'll need to define touched handlers both for Mole, same as now, and for MyCanvas. One subtle issue: if the player touches the mole, does that also count as a touch for MyCanvas? The answer is yes. Both touch events will register.
Review
Here are some of the ideas covered in this project:
⦁ Sprites are touch-sensitive shapes that you can program to move around on a Canvas.
⦁ The Clock component can be used as a timer to make events that happen at regular intervals.
⦁ Procedures are defined using to blocks.
⦁ For each procedure you define, App Inventor automatically creates an associated call block and places it in the My Definitions drawer.
⦁ Making a random-fraction block produces a number between 0 and 1.
⦁ Text blocks specify literal text, similar to the way that number blocks specify literal numbers.
⦁ Typeblocking is a way to create blocks quickly, by typing a block's name.
Scan the Sample App to your Phone
Scan the following barcode onto your phone to install and run the sample app.
Download Source Code
If you'd like to work with this sample in App Inventor, download the source code to your computer, then open App Inventor, click Projects, choose Import project (.aia) from my computer..., and select the source code you just downloaded.
Done with MoleMash? Return to the other tutorials here.
Tutorial Version: App Inventor 2
Tutorial Difficulty: Basic
Tutorial Type: Sprites Clock Timer Game
digitalmars.com
Last update Mon Aug 13 23:54:08 2018
User Defined Literals in the D Programming Language
April 6, 2011
written by Walter Bright
Programming languages define many kinds of literals — integer, floating point, character, and string literals being the most common. The C programming language adds some more, like octal and hexadecimal integer literals.
While programming languages allow the user to define his own types, functions, and variable names, it's pretty rare to allow defining one's own literals. It sure would be nice to be able to do so to go with defining types.
Why not? One answer comes from how programming language compilers are designed. The compiler operates as a series of passes:
1. lexing
2. parsing
3. semantic analysis
4. optimization
5. code generation
Literals are recognized in the lexing phase, while user defined things are recognized in the semantic analysis phase. Having the semantic phase feed back into the lexing phase tends to make for a mess of both the language and the compilers for it. Most language designers eschew doing that with the fervor of an English professor reviewing one of my essays.
But, darn it, it sure would be nice to have user defined literals. Hmmmm.
Let's harken back to C's octal integers, i.e. things like 0677. The leading zero makes it octal rather than decimal. Who the heck uses octal, and why is it in C? It turns out that many old machines (before the dawn of man) were programmed in octal rather than hexadecimal. The rise of the microcomputer pretty much killed off octal in favor of hexadecimal. A vestige of octal remains in the file permissions on Posix operating systems.
It's pretty much all that's left of octal.
It's rare enough that having the leading 0 meaning octal often comes as a nasty surprise to modern programmers. Hence there's pressure to remove those literals. The D programming language certainly feels that pressure.[1]
But, I like octal notation. I have a soft spot for it, it feels nice and comfortable. It's like a favorite shirt that unfortunately has too many holes in it to wear in public anymore, and frankly needs to go in the rag bin to wipe oil up from my leaky hotrod. I still like octal, though, and the thought of writing Linux file permissions
creat("filename", 0677);
as
creat("filename", ((6<<6)|(7<<3)|7));
leaves me cold.
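The two calls really are equivalent — reinterpreting the digits 6, 7, 7 in base 8 produces the same bits as the shift-and-OR expression. A quick sanity check (Python shown only to verify the arithmetic):

```python
# Octal 677 spelled three ways: an octal literal, explicit shifts, and hex/decimal.
mode_literal = 0o677
mode_shifts = (6 << 6) | (7 << 3) | 7
assert mode_literal == mode_shifts == 447 == 0x1BF
```

Note the 0x1BF — the same constant that shows up later in the compiler's assembly output.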
What can we do about it?
Let's start with a D function to turn an octal string into a number:
auto octal(string s) {
uint result = 0;
foreach (octalDigit; s) {
enforce(octalDigit >= '0' && octalDigit <= '7' && result < (1u << 29));
result = (result << 3) | (octalDigit - '0');
}
return result;
}
(The enforce is error checking for valid octal digits and overflows.) We can then write file permissions as:
creat("filename", octal("677"));
But because the octal value is computed at runtime rather than compile time, this just irks me like a bug in my soup. D has a perfectly marvy feature where functions can be executed at compile time rather than run time. Let's see if this can be pressed into service.
We could try:
enum mode = octal("677");
creat("filename", mode);
and that'll work at compile time. (D enums are manifest constants.) But of course that is hardly a workable user defined literal.
Another way to force a function to be run at compile time is to wrap it in a template,
auto octalImpl(string s) {
... same implementation as above ...
}
template octal(string s) {
enum octal = octalImpl(s);
}
D templates can use the ‘eponymous name trick’ where if there is only one member of the template and it matches the name of the template, the template gets replaced by its member.
It is then used like:
creat("filename", octal!"677");
(Templates with only one argument can be called with the name!arg syntax.) This is not looking half bad. But we can make it even better:
creat("filename", octal!677);
Wait, what? Isn't 677 a decimal literal? Yes. The trick is to overload the octal template to take an integer literal, then take the number apart digit by digit and rebuild it as octal:
auto octalImpl(uint i) {
uint result = 0;
int n;
while (i) {
auto octalDigit = i % 10;
i /= 10;
enforce(octalDigit < 8 && result < (1u << 29));
result |= octalDigit << n;
n += 3;
}
return result;
}
template octal(uint i) {
enum octal = octalImpl(i);
}
This all happens at compile time, which can be verified by looking at the output for
int main() {
creat("filename", octal!677);
return 0;
}
which is:
__Dmain:
push 01BFh // octal 677 in hexadecimal
mov EAX,offset FLAT:_DATA
push EAX
call near ptr _creat
xor EAX,EAX
add ESP,8
ret
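As a cross-check of the digit-splitting algorithm itself, here is a direct transcription of octalImpl into Python (illustrative only — the real implementation is the D code above):

```python
def octal_from_decimal_digits(i: int) -> int:
    """Reinterpret the base-10 digits of i as octal digits, as octalImpl does."""
    result, n = 0, 0
    while i:
        digit = i % 10   # peel off the lowest decimal digit
        i //= 10
        if digit >= 8:
            raise ValueError("invalid octal digit")
        result |= digit << n
        n += 3           # each octal digit occupies three bits
    return result

assert octal_from_decimal_digits(677) == 0o677 == 447   # 0x1BF, matching the asm
```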
Here is the complete code conceived and implemented by Adam D. Ruppe.
The implementation is a fair bit more involved than above, but for good reason; the idea stays the same. The complete library implementation detects and minds the usual integral suffixes and automatically switches to 64-bit representation when the input string is too large — just as you'd expect from a well-behaved literal. In fact, the code is not unlike the code handling C-style octal literals inside the compiler. That this can all be done in ‘user space’ is, I think, quite remarkable.
Conclusion
While this isn't technically a user defined literal, it came surprisingly close to one: flexible notation, compile-time evaluation — all with user-defined code, not code hardwired in the compiler.
The key to user-defined literals is compile-time evaluation of complex code (in this case, code that computes octal values from decimal values or strings). Putting ‘octal’ in the standard library brings progress — it allows us to gracefully remove an obsolete and troublesome feature like octal literals from the language, and opens the door to all sorts of user defined literals customized for user defined types. The feature is compelling enough that we have recently decided to effectively phase out built-in C-style octal literals from the D reference compiler [2].
References
1. 0nnn octal notation considered harmful
2. Deprecate Octal Literals
Acknowledgements
Thanks to Andrei Alexandrescu, David Held, Eric Niebler, and Brad Roberts for reviewing a draft of this.
use Module::Setup::Test::Utils;
use Test::More tests => 24;

use Module::Setup::Flavor;
use t::Flavor::FlavorTest;
use t::Flavor::FlavorTest2;
use t::Flavor::FlavorTestDouble;

do {
    local $@;
    eval { Module::Setup::Flavor->new->loader };
    like $@, qr/flavor template class is invalid: Module::Setup::Flavor/;

    eval { t::Flavor::FlavorTest->new->import_template( 'Module::Setup::Flavor::DUMMY' ) };
    like $@, qr!Can't locate Module/Setup/Flavor/DUMMY.pm!;
};

do {
    my @template = t::Flavor::FlavorTest->new->import_template( 't::Flavor::FlavorTestBase' );
    is scalar(@template), 9;
    ok grep { exists $_->{file}   && $_->{file}   eq 'foo.txt' } @template;
    ok grep { exists $_->{file}   && $_->{file}   eq 'bar.txt' } @template;
    ok grep { exists $_->{plugin} && $_->{plugin} eq 'foo.pm' } @template;
    ok grep { exists $_->{plugin} && $_->{plugin} eq 'bar.pm' } @template;
    ok grep { exists $_->{config} && $_->{config}->{foo} } grep { $_->{config} } @template;

    # template check
    for (grep { exists $_->{file} && $_->{file} eq 'foo.txt' } @template) {
        like $_->{template}, qr/local/;
    }
    for (grep { exists $_->{file} && $_->{file} eq 'bar.txt' } @template) {
        like $_->{template}, qr/base/;
    }
    for (grep { exists $_->{plugin} && $_->{plugin} eq 'foo.pm' } @template) {
        like $_->{template}, qr/package local::foo;/;
    }
    for (grep { exists $_->{plugin} && $_->{plugin} eq 'bar.pm' } @template) {
        like $_->{template}, qr/package base::bar;/;
    }

    ok grep { exists $_->{foo} && $_->{foo} } @template;
    ok grep { exists $_->{bar} && $_->{bar} } @template;
};

do {
    my @template = t::Flavor::FlavorTest2->new->import_template( 't::Flavor::FlavorTest2Base' );
    is scalar(@template), 1;
    ok grep { exists $_->{file} && $_->{file} eq 'foo.txt' } @template;
};

do {
    my @template = t::Flavor::FlavorTestDouble->new->import_template( 't::Flavor::FlavorTestBase', 't::Flavor::FlavorTestBase2' );
    is scalar(@template), 10;
    ok !grep { exists $_->{config} && $_->{config}->{foo} } grep { $_->{config} } @template;
    ok grep { exists $_->{config} && $_->{config}->{base2} } grep { $_->{config} } @template;
    ok grep { exists $_->{file} && $_->{file} eq 'foo.txt' } @template;
    ok grep { exists $_->{file} && $_->{file} eq 'double.txt' } @template;
    ok grep { exists $_->{file} && $_->{file} eq 'base2.txt' } @template;
    for (grep { exists $_->{file} && $_->{file} eq 'foo.txt' } @template) {
        like $_->{template}, qr/base2/;
    }
    for (grep { exists $_->{file} && $_->{file} eq 'base2.txt' } @template) {
        like $_->{template}, qr/nya-mo/;
    }
    for (grep { exists $_->{file} && $_->{file} eq 'double' } @template) {
        like $_->{template}, qr/2/;
    }
};
#hydrogen
Hydrogen is a front-end web development framework used for building Shopify custom storefronts. It includes the structure, components, and tooling you need to get started so you can spend your time styling and designing features that make your brand unique.
Hydrogen: An Early Look at Server Components in the Wild
React Advanced Conference 2022
7 min
Shopify's Hydrogen framework has been released with an early version of React's server components. In this talk I will discuss: * What is Hydrogen? * What are server components and how are we using them? * How are they different from client and shared components? * How are server-side rendering and server components different? * I'll also show examples in the wild. After the talk I hope the attendees will understand the Hydrogen framework and React server components better.
The Forest for the (Abstract Syntax) Trees
React Advanced Conference 2021
8 min
Call it "kickstarting", "scaffolding", "bootstrapping" or simply "typing words in a terminal and getting files to start with", this is often the first opportunity for a framework to either delight or disappoint developers. How easily can they get up and running, can they extend it with their ideal toolchain and how well will it scale? In this talk we'll explore the limitations of current solutions and examine the ways we set out to improve the developer onboarding experience of Shopify's new Hydrogen React framework and SDK.
How QA Automation Can Be Leveraged to Add Cost Advantage
7 min read
As the economic recession intensifies, organizations are increasingly focused on adopting technologies that make their processes cost-effective and recession-proof their business. For software companies, the challenge of delivering high-quality software products while saving costs has never been greater. Testing is often a significant expense, which leads businesses to compromise on the tools and infrastructure they deploy in order to cut costs. Additionally, many organizations have laid off skilled employees across departments to stay within budget amid this economic downturn.
How can organizations better address this economic recession?
Testing is an essential element of ensuring good-quality software, applications, and other digital products. Against this backdrop of an economic downswing, automating QA and testing processes has proven to deliver cost benefits for organizations, helping them work within these economic constraints while still shipping high-quality software to end users.
QA automation, or test automation, refers to the process of automating the testing of software applications by leveraging automation tools.
QA automation tools mostly execute repetitive tests, freeing teams to shift their focus from performing repetitive tasks manually to the more complex and advanced test cases.
Automating QA processes also helps improve continuous integration and delivery, which involves developing, testing, and deploying software regularly rather than in stages.
How does QA Automation testing work?
QA tests are primarily written in source code by the development teams; however, one can write them using keywords if they employ codeless testing tools.
Two primary ways for automating QA testing include:
Testing of the graphical user interface (GUI), which usually refers to quality assurance testing where the user experience is mimicked.
API testing that is leveraged to test the application programming interface (API) that lacks a GUI and needs to be checked at the message layer.
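To make the API-layer approach concrete, here is a minimal sketch in plain Python. The `UserAPI` class, its `create_user`/`get_user` methods, and the status codes are hypothetical stand-ins for a real service's message layer, which an actual QA suite would drive over HTTP; only the shape of the automated checks is the point.

```python
# Hypothetical in-memory stand-in for an application's message layer.
# A real QA suite would exercise the deployed service over HTTP instead;
# the names UserAPI, create_user and get_user are illustrative only.
class UserAPI:
    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create_user(self, name, email):
        # The message layer validates input and reports status codes,
        # which is exactly what API-level QA tests assert on.
        if "@" not in email:
            return {"status": 400, "error": "invalid email"}
        uid = self._next_id
        self._next_id += 1
        self._users[uid] = {"name": name, "email": email}
        return {"status": 201, "id": uid}

    def get_user(self, uid):
        user = self._users.get(uid)
        return {"status": 404} if user is None else {"status": 200, **user}


def run_api_checks(api):
    """A tiny automated check suite; returns a list of (name, passed) pairs."""
    results = []
    created = api.create_user("Ada", "ada@example.com")
    results.append(("create returns 201", created["status"] == 201))
    fetched = api.get_user(created["id"])
    results.append(("fetch returns stored name", fetched.get("name") == "Ada"))
    results.append(("bad email rejected",
                    api.create_user("Bob", "not-an-email")["status"] == 400))
    results.append(("missing user yields 404", api.get_user(999)["status"] == 404))
    return results
```

Calling `run_api_checks(UserAPI())` runs the four checks against the in-memory stub; the same assertions could be pointed at a staging deployment without changing the test logic.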
How can effective QA automation help recession-proof businesses?
Despite a significant initial investment for deploying a suitable automated testing environment, QA automation enables companies to cut down business expenses in the long run and costs comparatively less than several manual processes. This is primarily because once the required automation test scripts are prepared, teams are not required to monitor the test executions or troubleshoot in case of script failure. Therefore, test automation results in delivering the best quality apps with minimal need to fix glitches post the product release, thereby reducing the overall business expenses.
How can QA automation induce cost-effectiveness in app development projects?
• Reducing the overall regression testing cycle time
• Providing quicker feedback to development teams
• Testing custom-developed features where critical elements must be verified before delivery
• Accommodating the requirements into automated tests to reduce costly human errors
• Employing on-demand testing methodologies as required by the customizations
Additionally, a cloud-based environment, or a dedicated on-premises automated testing environment, enables teams to deliver QA feedback quickly, which helps lower project overhead.
Leveraging correct tools, automation frameworks, and test environments helps optimize test cycle time, driving better brand growth and profits in the long run. As a result, organizations should focus on deploying suitable QA automation software for improved outcomes.
Contributing to improved ROI
Repetitive testing consumes significant time before a product can launch; QA automation simplifies this by encouraging reusability and quicker validation, which helps launch the product faster. Additionally, reducing the number of resources needed for QA testing helps businesses capture greater ROI.
Primary stages of QA automation
Defining the scope of the project
This stage involves examining the goals of the testing process and performing a feasibility study that answers questions such as:
• Which tests can be automated?
• Which ones need human intervention?
• What budget, human resources, and expertise are required?
Selecting the right automation testing tool
Teams need a tool that caters to the company's needs, though the application's technology stack largely dictates the choice. There are several automation solutions to select from; hence, price, functionality, intuitiveness, and adaptability must be considered.
QA teams must also be given suitable instructions on how to use the tools to the fullest.
Creating a strategy
The QA team needs to design a suitable test strategy to efficiently outline the project's workflow, the final outcome, and a framework for running the test cases. The primary procedures, testing tools, and standards must be included in the framework.
Setting up the right environment
This involves configuring the right testing environment and optimizing test coverage across the different scenarios. Additionally, the testing team should plan and track hardware and software installation and testbed script development.
Writing the scripts
After setting up a suitable environment, QA engineers should build appropriate test scripts based on actual requirements and scripting standards. The scripts must preferably be reusable, organized, and understandable for third parties.
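One common way to keep scripts reusable, organized, and understandable, sketched here in Python, is to separate test data from test logic so that new cases are added as data rows rather than new code. The `validate_login` function and the credentials below are hypothetical stand-ins for a real system under test.

```python
# Hypothetical login validator standing in for the system under test.
def validate_login(username, password):
    return bool(username) and len(password) >= 8

# Test data lives apart from the logic: adding a case means adding a row,
# not writing a new script.
LOGIN_CASES = [
    {"name": "valid credentials", "user": "ada", "pw": "correct-horse", "expect": True},
    {"name": "short password",    "user": "ada", "pw": "abc",           "expect": False},
    {"name": "empty username",    "user": "",    "pw": "correct-horse", "expect": False},
]

def run_cases(cases, check):
    """Reusable runner: applies `check` to every case and returns failures."""
    failures = []
    for case in cases:
        got = check(case["user"], case["pw"])
        if got != case["expect"]:
            failures.append(case["name"])
    return failures
```

`run_cases(LOGIN_CASES, validate_login)` returns an empty list when every case behaves as expected, and the names of the failing cases otherwise; the runner itself never changes as the data set grows.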
Running the tests
GUI testing and API testing are the common approaches for automating QA testing; both leverage suitable tools for simulating user experiences.
Analyzing and reporting
Automated programs offer reports once the tests are completed. The results reveal the flaws in the components or the defects and if the product requires further testing.
Key benefits of QA automation
Increased test coverage
QA automation enables organizations to increase the overall coverage of their testing processes. With automation, QA teams can run many test cases simultaneously across multiple platforms and devices. Automation helps explore applications in depth and evaluate memory data, internal file structures, and data tables, which enhances the quality and performance of the final software product. Automated regression testing additionally streamlines testing every app feature, which is cumbersome in manual testing.
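The point about running many test cases simultaneously across platforms can be sketched with Python's standard `concurrent.futures`. The device names and the `smoke_check` function below are hypothetical placeholders for real per-device test runs:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical target platforms; a real device cloud would supply these.
DEVICES = ["android-13", "ios-17", "chrome-desktop", "firefox-desktop"]

def smoke_check(device):
    # Placeholder for a real per-device test run (install, launch, assert).
    # Here we only compute a deterministic pass/fail verdict.
    return device, not device.startswith("broken")

def run_in_parallel(devices, max_workers=4):
    """Fan the same check out across devices and gather the verdicts."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(smoke_check, devices))
```

With real device sessions the per-device runs are I/O-bound, so a thread pool like this lets one machine keep many platforms under test at once instead of iterating through them serially.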
Optimal resource utilization
Automated testing allows QA team members to apply their skills in advanced product testing. QA automation enables teams to execute functional and regression test cases with minimal or no human interference, and it reduces dependence on a large QA team, lowering both the time spent and the cost of hiring and training software testers.
QA automation helps companies to utilize experienced QA resources for creating better test cases and thereby helping to improve software quality.
Alignment with CI/CD and DevOps
In many scenarios, manual testing becomes hard to manage as the code grows more complicated or the number of test cases increases. QA automation allows development organizations to switch seamlessly to continuous integration and delivery.
Improved overall productivity
By reducing human intervention, automated testing lets development teams execute tests even late at night and obtain the findings the following day. Automated processes minimize testing time and let teams focus more on critical activities.
Quick feedback
QA automated testing provides immediate feedback; developers receive testing reports instantly thanks to fast test execution, so they can react quickly when a problem occurs.
Once the software product is launched in the market, immediate feedback helps inform further decisions about software quality. Automated QA testing therefore improves team responsiveness, enhances user experiences, and achieves greater customer satisfaction. Automation also proves more affordable in the long run, as there are no recurring costs for creating test scripts, which helps drive ROI.
HeadSpin as a critical tool in QA automation
The data science-driven holistic testing platform, HeadSpin, helps execute end-to-end testing on real devices and capture critical performance and functional KPIs to deliver high-quality applications and flawless user experience. The platform supports more than 30 automation frameworks and performs tests in real-user scenarios.
HeadSpin's AI Testing and Dev-Ops Collaboration Platform helps companies to leverage secure real device infrastructure to perform end-to-end testing and monitoring with real devices across the globe and evaluate the actual user experiences with complete security and performance through HeadSpin's varied cloud deployment models.
Conclusion
QA automation offers critical insights into testing processes, helping teams identify failures and work out resolutions that improve the overall performance and functionality of the application. Especially during an economic downturn, automating QA processes can prove to be a savior, helping businesses address these challenges while still delivering top-notch software products to end consumers.
Article resource: This article was originally published on https://www.headspin.io/blog/need-for-automated-qa-testing
/drivers/video/sis/init.c
https://bitbucket.org/cyanogenmod/android_kernel_asus_tf300t
Possible License(s): LGPL-2.0, AGPL-1.0, GPL-2.0
/* $XFree86$ */
/* $XdotOrg$ */
/*
 * Mode initializing code (CRT1 section) for
 * for SiS 300/305/540/630/730,
 * SiS 315/550/[M]650/651/[M]661[FGM]X/[M]74x[GX]/330/[M]76x[GX],
 * XGI Volari V3XT/V5/V8, Z7
 * (Universal module for Linux kernel framebuffer and X.org/XFree86 4.x)
 *
 * Copyright (C) 2001-2005 by Thomas Winischhofer, Vienna, Austria
 *
 * If distributed as part of the Linux kernel, the following license terms
 * apply:
 *
 * * This program is free software; you can redistribute it and/or modify
 * * it under the terms of the GNU General Public License as published by
 * * the Free Software Foundation; either version 2 of the named License,
 * * or any later version.
 * *
 * * This program is distributed in the hope that it will be useful,
 * * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * * GNU General Public License for more details.
 * *
 * * You should have received a copy of the GNU General Public License
 * * along with this program; if not, write to the Free Software
 * * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA
 *
 * Otherwise, the following license terms apply:
 *
 * * Redistribution and use in source and binary forms, with or without
 * * modification, are permitted provided that the following conditions
 * * are met:
 * * 1) Redistributions of source code must retain the above copyright
 * *    notice, this list of conditions and the following disclaimer.
 * * 2) Redistributions in binary form must reproduce the above copyright
 * *    notice, this list of conditions and the following disclaimer in the
 * *    documentation and/or other materials provided with the distribution.
 * * 3) The name of the author may not be used to endorse or promote products
 * *    derived from this software without specific prior written permission.
 * *
 * * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
 * * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
 * * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
 * * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
 * * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
 * * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
 * * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Author: Thomas Winischhofer <[email protected]>
 *
 * Formerly based on non-functional code-fragements for 300 series by SiS, Inc.
 * Used by permission.
 */

#include "init.h"

#ifdef CONFIG_FB_SIS_300
#include "300vtbl.h"
#endif

#ifdef CONFIG_FB_SIS_315
#include "310vtbl.h"
#endif

#if defined(ALLOC_PRAGMA)
#pragma alloc_text(PAGE,SiSSetMode)
#endif

/*********************************************/
/*         POINTER INITIALIZATION            */
/*********************************************/

#if defined(CONFIG_FB_SIS_300) || defined(CONFIG_FB_SIS_315)
static void
InitCommonPointer(struct SiS_Private *SiS_Pr)
{
   SiS_Pr->SiS_SModeIDTable  = SiS_SModeIDTable;
   SiS_Pr->SiS_StResInfo     = SiS_StResInfo;
   SiS_Pr->SiS_ModeResInfo   = SiS_ModeResInfo;
   SiS_Pr->SiS_StandTable    = SiS_StandTable;

   SiS_Pr->SiS_NTSCTiming     = SiS_NTSCTiming;
   SiS_Pr->SiS_PALTiming      = SiS_PALTiming;
   SiS_Pr->SiS_HiTVSt1Timing  = SiS_HiTVSt1Timing;
   SiS_Pr->SiS_HiTVSt2Timing  = SiS_HiTVSt2Timing;

   SiS_Pr->SiS_HiTVExtTiming  = SiS_HiTVExtTiming;
   SiS_Pr->SiS_HiTVGroup3Data = SiS_HiTVGroup3Data;
   SiS_Pr->SiS_HiTVGroup3Simu = SiS_HiTVGroup3Simu;
#if 0
   SiS_Pr->SiS_HiTVTextTiming = SiS_HiTVTextTiming;
   SiS_Pr->SiS_HiTVGroup3Text = SiS_HiTVGroup3Text;
#endif

   SiS_Pr->SiS_StPALData   = SiS_StPALData;
   SiS_Pr->SiS_ExtPALData  = SiS_ExtPALData;
   SiS_Pr->SiS_StNTSCData  = SiS_StNTSCData;
   SiS_Pr->SiS_ExtNTSCData = SiS_ExtNTSCData;
   SiS_Pr->SiS_St1HiTVData = SiS_StHiTVData;
   SiS_Pr->SiS_St2HiTVData = SiS_St2HiTVData;
   SiS_Pr->SiS_ExtHiTVData = SiS_ExtHiTVData;
   SiS_Pr->SiS_St525iData  = SiS_StNTSCData;
   SiS_Pr->SiS_St525pData  = SiS_St525pData;
   SiS_Pr->SiS_St750pData  = SiS_St750pData;
   SiS_Pr->SiS_Ext525iData = SiS_ExtNTSCData;
   SiS_Pr->SiS_Ext525pData = SiS_ExtNTSCData;
   SiS_Pr->SiS_Ext750pData = SiS_Ext750pData;

   SiS_Pr->pSiS_OutputSelect = &SiS_OutputSelect;
   SiS_Pr->pSiS_SoftSetting  = &SiS_SoftSetting;

   SiS_Pr->SiS_LCD1280x720Data      = SiS_LCD1280x720Data;
   SiS_Pr->SiS_StLCD1280x768_2Data  = SiS_StLCD1280x768_2Data;
   SiS_Pr->SiS_ExtLCD1280x768_2Data = SiS_ExtLCD1280x768_2Data;
   SiS_Pr->SiS_LCD1280x800Data      = SiS_LCD1280x800Data;
   SiS_Pr->SiS_LCD1280x800_2Data    = SiS_LCD1280x800_2Data;
   SiS_Pr->SiS_LCD1280x854Data      = SiS_LCD1280x854Data;
   SiS_Pr->SiS_LCD1280x960Data      = SiS_LCD1280x960Data;
   SiS_Pr->SiS_StLCD1400x1050Data   = SiS_StLCD1400x1050Data;
   SiS_Pr->SiS_ExtLCD1400x1050Data  = SiS_ExtLCD1400x1050Data;
   SiS_Pr->SiS_LCD1680x1050Data     = SiS_LCD1680x1050Data;
   SiS_Pr->SiS_StLCD1600x1200Data   = SiS_StLCD1600x1200Data;
   SiS_Pr->SiS_ExtLCD1600x1200Data  = SiS_ExtLCD1600x1200Data;
   SiS_Pr->SiS_NoScaleData          = SiS_NoScaleData;

   SiS_Pr->SiS_LVDS320x240Data_1  = SiS_LVDS320x240Data_1;
   SiS_Pr->SiS_LVDS320x240Data_2  = SiS_LVDS320x240Data_2;
   SiS_Pr->SiS_LVDS640x480Data_1  = SiS_LVDS640x480Data_1;
   SiS_Pr->SiS_LVDS800x600Data_1  = SiS_LVDS800x600Data_1;
   SiS_Pr->SiS_LVDS1024x600Data_1 = SiS_LVDS1024x600Data_1;
   SiS_Pr->SiS_LVDS1024x768Data_1 = SiS_LVDS1024x768Data_1;

   SiS_Pr->SiS_LVDSCRT1320x240_1   = SiS_LVDSCRT1320x240_1;
   SiS_Pr->SiS_LVDSCRT1320x240_2   = SiS_LVDSCRT1320x240_2;
   SiS_Pr->SiS_LVDSCRT1320x240_2_H = SiS_LVDSCRT1320x240_2_H;
   SiS_Pr->SiS_LVDSCRT1320x240_3   = SiS_LVDSCRT1320x240_3;
   SiS_Pr->SiS_LVDSCRT1320x240_3_H = SiS_LVDSCRT1320x240_3_H;
   SiS_Pr->SiS_LVDSCRT1640x480_1   = SiS_LVDSCRT1640x480_1;
   SiS_Pr->SiS_LVDSCRT1640x480_1_H = SiS_LVDSCRT1640x480_1_H;
#if 0
   SiS_Pr->SiS_LVDSCRT11024x600_1   = SiS_LVDSCRT11024x600_1;
   SiS_Pr->SiS_LVDSCRT11024x600_1_H = SiS_LVDSCRT11024x600_1_H;
   SiS_Pr->SiS_LVDSCRT11024x600_2   = SiS_LVDSCRT11024x600_2;
   SiS_Pr->SiS_LVDSCRT11024x600_2_H = SiS_LVDSCRT11024x600_2_H;
#endif

   SiS_Pr->SiS_CHTVUNTSCData = SiS_CHTVUNTSCData;
   SiS_Pr->SiS_CHTVONTSCData = SiS_CHTVONTSCData;

   SiS_Pr->SiS_PanelMinLVDS = Panel_800x600;    /* lowest value LVDS/LCDA */
   SiS_Pr->SiS_PanelMin301  = Panel_1024x768;   /* lowest value 301 */
}
#endif

#ifdef CONFIG_FB_SIS_300
static void
InitTo300Pointer(struct SiS_Private *SiS_Pr)
{
   InitCommonPointer(SiS_Pr);

   SiS_Pr->SiS_VBModeIDTable = SiS300_VBModeIDTable;
   SiS_Pr->SiS_EModeIDTable  = SiS300_EModeIDTable;
   SiS_Pr->SiS_RefIndex      = SiS300_RefIndex;
   SiS_Pr->SiS_CRT1Table     = SiS300_CRT1Table;
   if(SiS_Pr->ChipType == SIS_300) {
      SiS_Pr->SiS_MCLKData_0 = SiS300_MCLKData_300; /* 300 */
   } else {
      SiS_Pr->SiS_MCLKData_0 = SiS300_MCLKData_630; /* 630, 730 */
   }
   SiS_Pr->SiS_VCLKData   = SiS300_VCLKData;
   SiS_Pr->SiS_VBVCLKData = (struct SiS_VBVCLKData *)SiS300_VCLKData;

   SiS_Pr->SiS_SR15 = SiS300_SR15;

   SiS_Pr->SiS_PanelDelayTbl     = SiS300_PanelDelayTbl;
   SiS_Pr->SiS_PanelDelayTblLVDS = SiS300_PanelDelayTbl;

   SiS_Pr->SiS_ExtLCD1024x768Data  = SiS300_ExtLCD1024x768Data;
   SiS_Pr->SiS_St2LCD1024x768Data  = SiS300_St2LCD1024x768Data;
   SiS_Pr->SiS_ExtLCD1280x1024Data = SiS300_ExtLCD1280x1024Data;
   SiS_Pr->SiS_St2LCD1280x1024Data = SiS300_St2LCD1280x1024Data;

   SiS_Pr->SiS_CRT2Part2_1024x768_1 = SiS300_CRT2Part2_1024x768_1;
   SiS_Pr->SiS_CRT2Part2_1024x768_2 = SiS300_CRT2Part2_1024x768_2;
   SiS_Pr->SiS_CRT2Part2_1024x768_3 = SiS300_CRT2Part2_1024x768_3;

   SiS_Pr->SiS_CHTVUPALData  = SiS300_CHTVUPALData;
   SiS_Pr->SiS_CHTVOPALData  = SiS300_CHTVOPALData;
   SiS_Pr->SiS_CHTVUPALMData = SiS_CHTVUNTSCData;    /* not supported on 300 series */
   SiS_Pr->SiS_CHTVOPALMData = SiS_CHTVONTSCData;    /* not supported on 300 series */
   SiS_Pr->SiS_CHTVUPALNData = SiS300_CHTVUPALData;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVOPALNData = SiS300_CHTVOPALData;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVSOPALData = SiS300_CHTVSOPALData;

   SiS_Pr->SiS_LVDS848x480Data_1   = SiS300_LVDS848x480Data_1;
   SiS_Pr->SiS_LVDS848x480Data_2   = SiS300_LVDS848x480Data_2;
   SiS_Pr->SiS_LVDSBARCO1024Data_1 = SiS300_LVDSBARCO1024Data_1;
   SiS_Pr->SiS_LVDSBARCO1366Data_1 = SiS300_LVDSBARCO1366Data_1;
   SiS_Pr->SiS_LVDSBARCO1366Data_2 = SiS300_LVDSBARCO1366Data_2;

   SiS_Pr->SiS_PanelType04_1a = SiS300_PanelType04_1a;
   SiS_Pr->SiS_PanelType04_2a = SiS300_PanelType04_2a;
   SiS_Pr->SiS_PanelType04_1b = SiS300_PanelType04_1b;
   SiS_Pr->SiS_PanelType04_2b = SiS300_PanelType04_2b;

   SiS_Pr->SiS_CHTVCRT1UNTSC = SiS300_CHTVCRT1UNTSC;
   SiS_Pr->SiS_CHTVCRT1ONTSC = SiS300_CHTVCRT1ONTSC;
   SiS_Pr->SiS_CHTVCRT1UPAL  = SiS300_CHTVCRT1UPAL;
   SiS_Pr->SiS_CHTVCRT1OPAL  = SiS300_CHTVCRT1OPAL;
   SiS_Pr->SiS_CHTVCRT1SOPAL = SiS300_CHTVCRT1SOPAL;
   SiS_Pr->SiS_CHTVReg_UNTSC = SiS300_CHTVReg_UNTSC;
   SiS_Pr->SiS_CHTVReg_ONTSC = SiS300_CHTVReg_ONTSC;
   SiS_Pr->SiS_CHTVReg_UPAL  = SiS300_CHTVReg_UPAL;
   SiS_Pr->SiS_CHTVReg_OPAL  = SiS300_CHTVReg_OPAL;
   SiS_Pr->SiS_CHTVReg_UPALM = SiS300_CHTVReg_UNTSC; /* not supported on 300 series */
   SiS_Pr->SiS_CHTVReg_OPALM = SiS300_CHTVReg_ONTSC; /* not supported on 300 series */
   SiS_Pr->SiS_CHTVReg_UPALN = SiS300_CHTVReg_UPAL;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVReg_OPALN = SiS300_CHTVReg_OPAL;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVReg_SOPAL = SiS300_CHTVReg_SOPAL;
   SiS_Pr->SiS_CHTVVCLKUNTSC = SiS300_CHTVVCLKUNTSC;
   SiS_Pr->SiS_CHTVVCLKONTSC = SiS300_CHTVVCLKONTSC;
   SiS_Pr->SiS_CHTVVCLKUPAL  = SiS300_CHTVVCLKUPAL;
   SiS_Pr->SiS_CHTVVCLKOPAL  = SiS300_CHTVVCLKOPAL;
   SiS_Pr->SiS_CHTVVCLKUPALM = SiS300_CHTVVCLKUNTSC; /* not supported on 300 series */
   SiS_Pr->SiS_CHTVVCLKOPALM = SiS300_CHTVVCLKONTSC; /* not supported on 300 series */
   SiS_Pr->SiS_CHTVVCLKUPALN = SiS300_CHTVVCLKUPAL;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVVCLKOPALN = SiS300_CHTVVCLKOPAL;  /* not supported on 300 series */
   SiS_Pr->SiS_CHTVVCLKSOPAL = SiS300_CHTVVCLKSOPAL;
}
#endif

#ifdef CONFIG_FB_SIS_315
static void
InitTo310Pointer(struct SiS_Private *SiS_Pr)
{
   InitCommonPointer(SiS_Pr);

   SiS_Pr->SiS_EModeIDTable = SiS310_EModeIDTable;
   SiS_Pr->SiS_RefIndex     = SiS310_RefIndex;
   SiS_Pr->SiS_CRT1Table    = SiS310_CRT1Table;
   if(SiS_Pr->ChipType >= SIS_340) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_340;  /* 340 + XGI */
   } else if(SiS_Pr->ChipType >= SIS_761) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_761;  /* 761 - preliminary */
   } else if(SiS_Pr->ChipType >= SIS_760) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_760;  /* 760 */
   } else if(SiS_Pr->ChipType >= SIS_661) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_660;  /* 661/741 */
   } else if(SiS_Pr->ChipType == SIS_330) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_330;  /* 330 */
   } else if(SiS_Pr->ChipType > SIS_315PRO) {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_650;  /* 550, 650, 740 */
   } else {
      SiS_Pr->SiS_MCLKData_0 = SiS310_MCLKData_0_315;  /* 315 */
   }
   if(SiS_Pr->ChipType >= SIS_340) {
      SiS_Pr->SiS_MCLKData_1 = SiS310_MCLKData_1_340;
   } else {
      SiS_Pr->SiS_MCLKData_1 = SiS310_MCLKData_1;
   }
   SiS_Pr->SiS_VCLKData   = SiS310_VCLKData;
   SiS_Pr->SiS_VBVCLKData = SiS310_VBVCLKData;

   SiS_Pr->SiS_SR15 = SiS310_SR15;

   SiS_Pr->SiS_PanelDelayTbl     = SiS310_PanelDelayTbl;
   SiS_Pr->SiS_PanelDelayTblLVDS = SiS310_PanelDelayTblLVDS;

   SiS_Pr->SiS_St2LCD1024x768Data  = SiS310_St2LCD1024x768Data;
   SiS_Pr->SiS_ExtLCD1024x768Data  = SiS310_ExtLCD1024x768Data;
   SiS_Pr->SiS_St2LCD1280x1024Data = SiS310_St2LCD1280x1024Data;
   SiS_Pr->SiS_ExtLCD1280x1024Data = SiS310_ExtLCD1280x1024Data;

   SiS_Pr->SiS_CRT2Part2_1024x768_1 = SiS310_CRT2Part2_1024x768_1;

   SiS_Pr->SiS_CHTVUPALData  = SiS310_CHTVUPALData;
   SiS_Pr->SiS_CHTVOPALData  = SiS310_CHTVOPALData;
   SiS_Pr->SiS_CHTVUPALMData = SiS310_CHTVUPALMData;
   SiS_Pr->SiS_CHTVOPALMData = SiS310_CHTVOPALMData;
   SiS_Pr->SiS_CHTVUPALNData = SiS310_CHTVUPALNData;
   SiS_Pr->SiS_CHTVOPALNData = SiS310_CHTVOPALNData;
   SiS_Pr->SiS_CHTVSOPALData = SiS310_CHTVSOPALData;

   SiS_Pr->SiS_CHTVCRT1UNTSC = SiS310_CHTVCRT1UNTSC;
   SiS_Pr->SiS_CHTVCRT1ONTSC = SiS310_CHTVCRT1ONTSC;
   SiS_Pr->SiS_CHTVCRT1UPAL  = SiS310_CHTVCRT1UPAL;
   SiS_Pr->SiS_CHTVCRT1OPAL  = SiS310_CHTVCRT1OPAL;
   SiS_Pr->SiS_CHTVCRT1SOPAL = SiS310_CHTVCRT1OPAL;

   SiS_Pr->SiS_CHTVReg_UNTSC = SiS310_CHTVReg_UNTSC;
   SiS_Pr->SiS_CHTVReg_ONTSC = SiS310_CHTVReg_ONTSC;
   SiS_Pr->SiS_CHTVReg_UPAL  = SiS310_CHTVReg_UPAL;
   SiS_Pr->SiS_CHTVReg_OPAL  = SiS310_CHTVReg_OPAL;
   SiS_Pr->SiS_CHTVReg_UPALM = SiS310_CHTVReg_UPALM;
   SiS_Pr->SiS_CHTVReg_OPALM = SiS310_CHTVReg_OPALM;
   SiS_Pr->SiS_CHTVReg_UPALN = SiS310_CHTVReg_UPALN;
   SiS_Pr->SiS_CHTVReg_OPALN = SiS310_CHTVReg_OPALN;
   SiS_Pr->SiS_CHTVReg_SOPAL = SiS310_CHTVReg_OPAL;

   SiS_Pr->SiS_CHTVVCLKUNTSC = SiS310_CHTVVCLKUNTSC;
   SiS_Pr->SiS_CHTVVCLKONTSC = SiS310_CHTVVCLKONTSC;
   SiS_Pr->SiS_CHTVVCLKUPAL  = SiS310_CHTVVCLKUPAL;
   SiS_Pr->SiS_CHTVVCLKOPAL  = SiS310_CHTVVCLKOPAL;
   SiS_Pr->SiS_CHTVVCLKUPALM = SiS310_CHTVVCLKUPALM;
   SiS_Pr->SiS_CHTVVCLKOPALM = SiS310_CHTVVCLKOPALM;
   SiS_Pr->SiS_CHTVVCLKUPALN = SiS310_CHTVVCLKUPALN;
   SiS_Pr->SiS_CHTVVCLKOPALN = SiS310_CHTVVCLKOPALN;
   SiS_Pr->SiS_CHTVVCLKSOPAL = SiS310_CHTVVCLKOPAL;
}
#endif

bool
SiSInitPtr(struct SiS_Private *SiS_Pr)
{
   if(SiS_Pr->ChipType < SIS_315H) {
#ifdef CONFIG_FB_SIS_300
      InitTo300Pointer(SiS_Pr);
#else
      return false;
#endif
   } else {
#ifdef CONFIG_FB_SIS_315
      InitTo310Pointer(SiS_Pr);
#else
      return false;
#endif
   }
   return true;
}

/*********************************************/
/*            HELPER: Get ModeID             */
/*********************************************/

static
unsigned short
SiS_GetModeID(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay,
		int Depth, bool FSTN, int LCDwidth, int LCDheight)
{
   unsigned short ModeIndex = 0;

   switch(HDisplay)
   {
	case 320:
		if(VDisplay == 200) ModeIndex = ModeIndex_320x200[Depth];
		else if(VDisplay == 240) {
			if((VBFlags & CRT2_LCD) && (FSTN))
				ModeIndex = ModeIndex_320x240_FSTN[Depth];
			else
				ModeIndex = ModeIndex_320x240[Depth];
		}
		break;
	case 400:
		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 800) && (LCDwidth >= 600))) {
			if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
		}
		break;
	case 512:
		if((!(VBFlags & CRT1_LCDA)) || ((LCDwidth >= 1024) && (LCDwidth >= 768))) {
			if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
		}
		break;
	case 640:
		if(VDisplay == 480)      ModeIndex = ModeIndex_640x480[Depth];
		else if(VDisplay == 400) ModeIndex = ModeIndex_640x400[Depth];
		break;
	case 720:
		if(VDisplay == 480)      ModeIndex = ModeIndex_720x480[Depth];
		else if(VDisplay == 576) ModeIndex = ModeIndex_720x576[Depth];
		break;
	case 768:
		if(VDisplay == 576) ModeIndex = ModeIndex_768x576[Depth];
		break;
	case 800:
		if(VDisplay == 600)      ModeIndex = ModeIndex_800x600[Depth];
		else if(VDisplay == 480) ModeIndex = ModeIndex_800x480[Depth];
		break;
	case 848:
		if(VDisplay == 480) ModeIndex = ModeIndex_848x480[Depth];
		break;
	case 856:
		if(VDisplay == 480) ModeIndex = ModeIndex_856x480[Depth];
		break;
	case 960:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 540)      ModeIndex = ModeIndex_960x540[Depth];
			else if(VDisplay == 600) ModeIndex = ModeIndex_960x600[Depth];
		}
		break;
	case 1024:
		if(VDisplay == 576)      ModeIndex = ModeIndex_1024x576[Depth];
		else if(VDisplay == 768) ModeIndex = ModeIndex_1024x768[Depth];
		else if(VGAEngine == SIS_300_VGA) {
			if(VDisplay == 600) ModeIndex = ModeIndex_1024x600[Depth];
		}
		break;
	case 1152:
		if(VDisplay == 864) ModeIndex = ModeIndex_1152x864[Depth];
		if(VGAEngine == SIS_300_VGA) {
			if(VDisplay == 768) ModeIndex = ModeIndex_1152x768[Depth];
		}
		break;
	case 1280:
		switch(VDisplay) {
		case 720:
			ModeIndex = ModeIndex_1280x720[Depth];
			break;
		case 768:
			if(VGAEngine == SIS_300_VGA) {
				ModeIndex = ModeIndex_300_1280x768[Depth];
			} else {
				ModeIndex = ModeIndex_310_1280x768[Depth];
			}
			break;
		case 800:
			if(VGAEngine == SIS_315_VGA) {
				ModeIndex = ModeIndex_1280x800[Depth];
			}
			break;
		case 854:
			if(VGAEngine == SIS_315_VGA) {
				ModeIndex = ModeIndex_1280x854[Depth];
			}
			break;
		case 960:
			ModeIndex = ModeIndex_1280x960[Depth];
			break;
		case 1024:
			ModeIndex = ModeIndex_1280x1024[Depth];
			break;
		}
		break;
	case 1360:
		if(VDisplay == 768) ModeIndex = ModeIndex_1360x768[Depth];
		if(VGAEngine == SIS_300_VGA) {
			if(VDisplay == 1024) ModeIndex = ModeIndex_300_1360x1024[Depth];
		}
		break;
	case 1400:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 1050) {
				ModeIndex = ModeIndex_1400x1050[Depth];
			}
		}
		break;
	case 1600:
		if(VDisplay == 1200) ModeIndex = ModeIndex_1600x1200[Depth];
		break;
	case 1680:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 1050) ModeIndex = ModeIndex_1680x1050[Depth];
		}
		break;
	case 1920:
		if(VDisplay == 1440) ModeIndex = ModeIndex_1920x1440[Depth];
		else if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 1080) ModeIndex = ModeIndex_1920x1080[Depth];
		}
		break;
	case 2048:
		if(VDisplay == 1536) {
			if(VGAEngine == SIS_300_VGA) {
				ModeIndex = ModeIndex_300_2048x1536[Depth];
			} else {
				ModeIndex = ModeIndex_310_2048x1536[Depth];
			}
		}
		break;
   }

   return ModeIndex;
}

unsigned short
SiS_GetModeID_LCD(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay,
		int Depth, bool FSTN, unsigned short CustomT, int LCDwidth, int LCDheight,
		unsigned int VBFlags2)
{
   unsigned short ModeIndex = 0;

   if(VBFlags2 & (VB2_LVDS | VB2_30xBDH)) {

	switch(HDisplay)
	{
	case 320:
		if((CustomT != CUT_PANEL848) && (CustomT != CUT_PANEL856)) {
			if(VDisplay == 200) {
				if(!FSTN) ModeIndex = ModeIndex_320x200[Depth];
			} else if(VDisplay == 240) {
				if(!FSTN) ModeIndex = ModeIndex_320x240[Depth];
				else if(VGAEngine == SIS_315_VGA) {
					ModeIndex = ModeIndex_320x240_FSTN[Depth];
				}
			}
		}
		break;
	case 400:
		if((CustomT != CUT_PANEL848) && (CustomT != CUT_PANEL856)) {
			if(!((VGAEngine == SIS_300_VGA) && (VBFlags2 & VB2_TRUMPION))) {
				if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
			}
		}
		break;
	case 512:
		if((CustomT != CUT_PANEL848) && (CustomT != CUT_PANEL856)) {
			if(!((VGAEngine == SIS_300_VGA) && (VBFlags2 & VB2_TRUMPION))) {
				if(LCDwidth >= 1024 && LCDwidth != 1152 && LCDheight >= 768) {
					if(VDisplay == 384) {
						ModeIndex = ModeIndex_512x384[Depth];
					}
				}
			}
		}
		break;
	case 640:
		if(VDisplay == 480) ModeIndex = ModeIndex_640x480[Depth];
		else if(VDisplay == 400) {
			if((CustomT != CUT_PANEL848) && (CustomT != CUT_PANEL856))
				ModeIndex = ModeIndex_640x400[Depth];
		}
		break;
	case 800:
		if(VDisplay == 600) ModeIndex = ModeIndex_800x600[Depth];
		break;
	case 848:
		if(CustomT == CUT_PANEL848) {
			if(VDisplay == 480) ModeIndex = ModeIndex_848x480[Depth];
		}
		break;
	case 856:
		if(CustomT == CUT_PANEL856) {
			if(VDisplay == 480) ModeIndex = ModeIndex_856x480[Depth];
		}
		break;
	case 1024:
		if(VDisplay == 768) ModeIndex = ModeIndex_1024x768[Depth];
		else if(VGAEngine == SIS_300_VGA) {
			if((VDisplay == 600) && (LCDheight == 600)) {
				ModeIndex = ModeIndex_1024x600[Depth];
			}
		}
		break;
	case 1152:
		if(VGAEngine == SIS_300_VGA) {
			if((VDisplay == 768) && (LCDheight == 768)) {
				ModeIndex = ModeIndex_1152x768[Depth];
			}
		}
		break;
	case 1280:
		if(VDisplay == 1024) ModeIndex = ModeIndex_1280x1024[Depth];
		else if(VGAEngine == SIS_315_VGA) {
			if((VDisplay == 768) && (LCDheight == 768)) {
				ModeIndex = ModeIndex_310_1280x768[Depth];
			}
		}
		break;
	case 1360:
		if(VGAEngine == SIS_300_VGA) {
			if(CustomT == CUT_BARCO1366) {
				if(VDisplay == 1024) ModeIndex = ModeIndex_300_1360x1024[Depth];
			}
		}
		if(CustomT == CUT_PANEL848) {
			if(VDisplay == 768) ModeIndex = ModeIndex_1360x768[Depth];
		}
		break;
	case 1400:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 1050) ModeIndex = ModeIndex_1400x1050[Depth];
		}
		break;
	case 1600:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 1200) ModeIndex = ModeIndex_1600x1200[Depth];
		}
		break;
	}

   } else if(VBFlags2 & VB2_SISBRIDGE) {

	switch(HDisplay)
	{
	case 320:
		if(VDisplay == 200)      ModeIndex = ModeIndex_320x200[Depth];
		else if(VDisplay == 240) ModeIndex = ModeIndex_320x240[Depth];
		break;
	case 400:
		if(LCDwidth >= 800 && LCDheight >= 600) {
			if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
		}
		break;
	case 512:
		if(LCDwidth >= 1024 && LCDheight >= 768 && LCDwidth != 1152) {
			if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
		}
		break;
	case 640:
		if(VDisplay == 480)      ModeIndex = ModeIndex_640x480[Depth];
		else if(VDisplay == 400) ModeIndex = ModeIndex_640x400[Depth];
		break;
	case 720:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 480)      ModeIndex = ModeIndex_720x480[Depth];
			else if(VDisplay == 576) ModeIndex = ModeIndex_720x576[Depth];
		}
		break;
	case 768:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 576) ModeIndex = ModeIndex_768x576[Depth];
		}
		break;
	case 800:
		if(VDisplay == 600) ModeIndex = ModeIndex_800x600[Depth];
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 480) ModeIndex = ModeIndex_800x480[Depth];
		}
		break;
	case 848:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 480) ModeIndex = ModeIndex_848x480[Depth];
		}
		break;
	case 856:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 480) ModeIndex = ModeIndex_856x480[Depth];
		}
		break;
	case 960:
		if(VGAEngine == SIS_315_VGA) {
			if(VDisplay == 540)      ModeIndex = ModeIndex_960x540[Depth];
			else if(VDisplay == 600) ModeIndex = ModeIndex_960x600[Depth];
		}
		break;
	case 1024:
		if(VDisplay == 768) ModeIndex = ModeIndex_1024x768[Depth];
641 if(VGAEngine == SIS_315_VGA) {
642 if(VDisplay == 576) ModeIndex = ModeIndex_1024x576[Depth];
643 }
644 break;
645 case 1152:
646 if(VGAEngine == SIS_315_VGA) {
647 if(VDisplay == 864) ModeIndex = ModeIndex_1152x864[Depth];
648 }
649 break;
650 case 1280:
651 switch(VDisplay) {
 652	      case 720:
 653		 ModeIndex = ModeIndex_1280x720[Depth];
		 break;
 654	      case 768:
655 if(VGAEngine == SIS_300_VGA) {
656 ModeIndex = ModeIndex_300_1280x768[Depth];
657 } else {
658 ModeIndex = ModeIndex_310_1280x768[Depth];
659 }
660 break;
661 case 800:
662 if(VGAEngine == SIS_315_VGA) {
663 ModeIndex = ModeIndex_1280x800[Depth];
664 }
665 break;
666 case 854:
667 if(VGAEngine == SIS_315_VGA) {
668 ModeIndex = ModeIndex_1280x854[Depth];
669 }
670 break;
671 case 960:
672 ModeIndex = ModeIndex_1280x960[Depth];
673 break;
674 case 1024:
675 ModeIndex = ModeIndex_1280x1024[Depth];
676 break;
677 }
678 break;
679 case 1360:
680 if(VGAEngine == SIS_315_VGA) { /* OVER1280 only? */
681 if(VDisplay == 768) ModeIndex = ModeIndex_1360x768[Depth];
682 }
683 break;
684 case 1400:
685 if(VGAEngine == SIS_315_VGA) {
686 if(VBFlags2 & VB2_LCDOVER1280BRIDGE) {
687 if(VDisplay == 1050) ModeIndex = ModeIndex_1400x1050[Depth];
688 }
689 }
690 break;
691 case 1600:
692 if(VGAEngine == SIS_315_VGA) {
693 if(VBFlags2 & VB2_LCDOVER1280BRIDGE) {
694 if(VDisplay == 1200) ModeIndex = ModeIndex_1600x1200[Depth];
695 }
696 }
697 break;
698#ifndef VB_FORBID_CRT2LCD_OVER_1600
699 case 1680:
700 if(VGAEngine == SIS_315_VGA) {
701 if(VBFlags2 & VB2_LCDOVER1280BRIDGE) {
702 if(VDisplay == 1050) ModeIndex = ModeIndex_1680x1050[Depth];
703 }
704 }
705 break;
706 case 1920:
707 if(VGAEngine == SIS_315_VGA) {
708 if(VBFlags2 & VB2_LCDOVER1600BRIDGE) {
709 if(VDisplay == 1440) ModeIndex = ModeIndex_1920x1440[Depth];
710 }
711 }
712 break;
713 case 2048:
714 if(VGAEngine == SIS_315_VGA) {
715 if(VBFlags2 & VB2_LCDOVER1600BRIDGE) {
716 if(VDisplay == 1536) ModeIndex = ModeIndex_310_2048x1536[Depth];
717 }
718 }
719 break;
720#endif
721 }
722 }
723
724 return ModeIndex;
725}
726
727unsigned short
728SiS_GetModeID_TV(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay, int Depth,
729 unsigned int VBFlags2)
730{
731 unsigned short ModeIndex = 0;
732
733 if(VBFlags2 & VB2_CHRONTEL) {
734
735 switch(HDisplay)
736 {
737 case 512:
738 if(VGAEngine == SIS_315_VGA) {
739 if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
740 }
741 break;
742 case 640:
743 if(VDisplay == 480) ModeIndex = ModeIndex_640x480[Depth];
744 else if(VDisplay == 400) ModeIndex = ModeIndex_640x400[Depth];
745 break;
746 case 800:
747 if(VDisplay == 600) ModeIndex = ModeIndex_800x600[Depth];
748 break;
749 case 1024:
750 if(VGAEngine == SIS_315_VGA) {
751 if(VDisplay == 768) ModeIndex = ModeIndex_1024x768[Depth];
752 }
753 break;
754 }
755
756 } else if(VBFlags2 & VB2_SISTVBRIDGE) {
757
758 switch(HDisplay)
759 {
760 case 320:
761 if(VDisplay == 200) ModeIndex = ModeIndex_320x200[Depth];
762 else if(VDisplay == 240) ModeIndex = ModeIndex_320x240[Depth];
763 break;
764 case 400:
765 if(VDisplay == 300) ModeIndex = ModeIndex_400x300[Depth];
766 break;
767 case 512:
768 if( ((VBFlags & TV_YPBPR) && (VBFlags & (TV_YPBPR750P | TV_YPBPR1080I))) ||
769 (VBFlags & TV_HIVISION) ||
770 ((!(VBFlags & (TV_YPBPR | TV_PALM))) && (VBFlags & TV_PAL)) ) {
771 if(VDisplay == 384) ModeIndex = ModeIndex_512x384[Depth];
772 }
773 break;
774 case 640:
775 if(VDisplay == 480) ModeIndex = ModeIndex_640x480[Depth];
776 else if(VDisplay == 400) ModeIndex = ModeIndex_640x400[Depth];
777 break;
778 case 720:
779 if((!(VBFlags & TV_HIVISION)) && (!((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR1080I)))) {
780 if(VDisplay == 480) {
781 ModeIndex = ModeIndex_720x480[Depth];
782 } else if(VDisplay == 576) {
783 if( ((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR750P)) ||
784 ((!(VBFlags & (TV_YPBPR | TV_PALM))) && (VBFlags & TV_PAL)) )
785 ModeIndex = ModeIndex_720x576[Depth];
786 }
787 }
788 break;
789 case 768:
790 if((!(VBFlags & TV_HIVISION)) && (!((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR1080I)))) {
791 if( ((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR750P)) ||
792 ((!(VBFlags & (TV_YPBPR | TV_PALM))) && (VBFlags & TV_PAL)) ) {
793 if(VDisplay == 576) ModeIndex = ModeIndex_768x576[Depth];
794 }
795 }
796 break;
797 case 800:
798 if(VDisplay == 600) ModeIndex = ModeIndex_800x600[Depth];
799 else if(VDisplay == 480) {
800 if(!((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR750P))) {
801 ModeIndex = ModeIndex_800x480[Depth];
802 }
803 }
804 break;
805 case 960:
806 if(VGAEngine == SIS_315_VGA) {
807 if(VDisplay == 600) {
808 if((VBFlags & TV_HIVISION) || ((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR1080I))) {
809 ModeIndex = ModeIndex_960x600[Depth];
810 }
811 }
812 }
813 break;
814 case 1024:
815 if(VDisplay == 768) {
816 if(VBFlags2 & VB2_30xBLV) {
817 ModeIndex = ModeIndex_1024x768[Depth];
818 }
819 } else if(VDisplay == 576) {
820 if( (VBFlags & TV_HIVISION) ||
821 ((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR1080I)) ||
822 ((VBFlags2 & VB2_30xBLV) &&
823 ((!(VBFlags & (TV_YPBPR | TV_PALM))) && (VBFlags & TV_PAL))) ) {
824 ModeIndex = ModeIndex_1024x576[Depth];
825 }
826 }
827 break;
828 case 1280:
829 if(VDisplay == 720) {
830 if((VBFlags & TV_HIVISION) ||
831 ((VBFlags & TV_YPBPR) && (VBFlags & (TV_YPBPR1080I | TV_YPBPR750P)))) {
832 ModeIndex = ModeIndex_1280x720[Depth];
833 }
834 } else if(VDisplay == 1024) {
835 if((VBFlags & TV_HIVISION) ||
836 ((VBFlags & TV_YPBPR) && (VBFlags & TV_YPBPR1080I))) {
837 ModeIndex = ModeIndex_1280x1024[Depth];
838 }
839 }
840 break;
841 }
842 }
843 return ModeIndex;
844}
845
846unsigned short
847SiS_GetModeID_VGA2(int VGAEngine, unsigned int VBFlags, int HDisplay, int VDisplay, int Depth,
848 unsigned int VBFlags2)
849{
850 if(!(VBFlags2 & VB2_SISVGA2BRIDGE)) return 0;
851
852 if(HDisplay >= 1920) return 0;
853
854 switch(HDisplay)
855 {
856 case 1600:
857 if(VDisplay == 1200) {
858 if(VGAEngine != SIS_315_VGA) return 0;
859 if(!(VBFlags2 & VB2_30xB)) return 0;
860 }
861 break;
862 case 1680:
863 if(VDisplay == 1050) {
864 if(VGAEngine != SIS_315_VGA) return 0;
865 if(!(VBFlags2 & VB2_30xB)) return 0;
866 }
867 break;
868 }
869
870 return SiS_GetModeID(VGAEngine, 0, HDisplay, VDisplay, Depth, false, 0, 0);
871}
872
873
874/*********************************************/
875/* HELPER: SetReg, GetReg */
876/*********************************************/
877
878void
879SiS_SetReg(SISIOADDRESS port, u8 index, u8 data)
880{
881 outb(index, port);
882 outb(data, port + 1);
883}
884
885void
886SiS_SetRegByte(SISIOADDRESS port, u8 data)
887{
888 outb(data, port);
889}
890
891void
892SiS_SetRegShort(SISIOADDRESS port, u16 data)
893{
894 outw(data, port);
895}
896
897void
898SiS_SetRegLong(SISIOADDRESS port, u32 data)
899{
900 outl(data, port);
901}
902
903u8
904SiS_GetReg(SISIOADDRESS port, u8 index)
905{
906 outb(index, port);
907 return inb(port + 1);
908}
909
910u8
911SiS_GetRegByte(SISIOADDRESS port)
912{
913 return inb(port);
914}
915
916u16
917SiS_GetRegShort(SISIOADDRESS port)
918{
919 return inw(port);
920}
921
922u32
923SiS_GetRegLong(SISIOADDRESS port)
924{
925 return inl(port);
926}
927
928void
929SiS_SetRegANDOR(SISIOADDRESS Port, u8 Index, u8 DataAND, u8 DataOR)
930{
931 u8 temp;
932
933 temp = SiS_GetReg(Port, Index);
934 temp = (temp & (DataAND)) | DataOR;
935 SiS_SetReg(Port, Index, temp);
936}
937
938void
939SiS_SetRegAND(SISIOADDRESS Port, u8 Index, u8 DataAND)
940{
941 u8 temp;
942
943 temp = SiS_GetReg(Port, Index);
944 temp &= DataAND;
945 SiS_SetReg(Port, Index, temp);
946}
947
948void
949SiS_SetRegOR(SISIOADDRESS Port, u8 Index, u8 DataOR)
950{
951 u8 temp;
952
953 temp = SiS_GetReg(Port, Index);
954 temp |= DataOR;
955 SiS_SetReg(Port, Index, temp);
956}
957
958/*********************************************/
959/* HELPER: DisplayOn, DisplayOff */
960/*********************************************/
961
962void
963SiS_DisplayOn(struct SiS_Private *SiS_Pr)
964{
965 SiS_SetRegAND(SiS_Pr->SiS_P3c4,0x01,0xDF);
966}
967
968void
969SiS_DisplayOff(struct SiS_Private *SiS_Pr)
970{
971 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x01,0x20);
972}
973
974
975/*********************************************/
976/* HELPER: Init Port Addresses */
977/*********************************************/
978
979void
980SiSRegInit(struct SiS_Private *SiS_Pr, SISIOADDRESS BaseAddr)
981{
982 SiS_Pr->SiS_P3c4 = BaseAddr + 0x14;
983 SiS_Pr->SiS_P3d4 = BaseAddr + 0x24;
984 SiS_Pr->SiS_P3c0 = BaseAddr + 0x10;
985 SiS_Pr->SiS_P3ce = BaseAddr + 0x1e;
986 SiS_Pr->SiS_P3c2 = BaseAddr + 0x12;
987 SiS_Pr->SiS_P3ca = BaseAddr + 0x1a;
988 SiS_Pr->SiS_P3c6 = BaseAddr + 0x16;
989 SiS_Pr->SiS_P3c7 = BaseAddr + 0x17;
990 SiS_Pr->SiS_P3c8 = BaseAddr + 0x18;
991 SiS_Pr->SiS_P3c9 = BaseAddr + 0x19;
992 SiS_Pr->SiS_P3cb = BaseAddr + 0x1b;
993 SiS_Pr->SiS_P3cc = BaseAddr + 0x1c;
994 SiS_Pr->SiS_P3cd = BaseAddr + 0x1d;
995 SiS_Pr->SiS_P3da = BaseAddr + 0x2a;
996 SiS_Pr->SiS_Part1Port = BaseAddr + SIS_CRT2_PORT_04;
997 SiS_Pr->SiS_Part2Port = BaseAddr + SIS_CRT2_PORT_10;
998 SiS_Pr->SiS_Part3Port = BaseAddr + SIS_CRT2_PORT_12;
999 SiS_Pr->SiS_Part4Port = BaseAddr + SIS_CRT2_PORT_14;
1000 SiS_Pr->SiS_Part5Port = BaseAddr + SIS_CRT2_PORT_14 + 2;
1001 SiS_Pr->SiS_DDC_Port = BaseAddr + 0x14;
1002 SiS_Pr->SiS_VidCapt = BaseAddr + SIS_VIDEO_CAPTURE;
1003 SiS_Pr->SiS_VidPlay = BaseAddr + SIS_VIDEO_PLAYBACK;
1004}
1005
1006/*********************************************/
1007/* HELPER: GetSysFlags */
1008/*********************************************/
1009
1010static void
1011SiS_GetSysFlags(struct SiS_Private *SiS_Pr)
1012{
1013 unsigned char cr5f, temp1, temp2;
1014
1015 /* 661 and newer: NEVER write non-zero to SR11[7:4] */
1016 /* (SR11 is used for DDC and in enable/disablebridge) */
1017 SiS_Pr->SiS_SensibleSR11 = false;
1018 SiS_Pr->SiS_MyCR63 = 0x63;
1019 if(SiS_Pr->ChipType >= SIS_330) {
1020 SiS_Pr->SiS_MyCR63 = 0x53;
1021 if(SiS_Pr->ChipType >= SIS_661) {
1022 SiS_Pr->SiS_SensibleSR11 = true;
1023 }
1024 }
1025
1026 /* You should use the macros, not these flags directly */
1027
1028 SiS_Pr->SiS_SysFlags = 0;
1029 if(SiS_Pr->ChipType == SIS_650) {
1030 cr5f = SiS_GetReg(SiS_Pr->SiS_P3d4,0x5f) & 0xf0;
1031 SiS_SetRegAND(SiS_Pr->SiS_P3d4,0x5c,0x07);
1032 temp1 = SiS_GetReg(SiS_Pr->SiS_P3d4,0x5c) & 0xf8;
1033 SiS_SetRegOR(SiS_Pr->SiS_P3d4,0x5c,0xf8);
1034 temp2 = SiS_GetReg(SiS_Pr->SiS_P3d4,0x5c) & 0xf8;
1035 if((!temp1) || (temp2)) {
1036 switch(cr5f) {
1037 case 0x80:
1038 case 0x90:
1039 case 0xc0:
1040 SiS_Pr->SiS_SysFlags |= SF_IsM650;
1041 break;
1042 case 0xa0:
1043 case 0xb0:
1044 case 0xe0:
1045 SiS_Pr->SiS_SysFlags |= SF_Is651;
1046 break;
1047 }
1048 } else {
1049 switch(cr5f) {
1050 case 0x90:
1051 temp1 = SiS_GetReg(SiS_Pr->SiS_P3d4,0x5c) & 0xf8;
1052 switch(temp1) {
1053 case 0x00: SiS_Pr->SiS_SysFlags |= SF_IsM652; break;
1054 case 0x40: SiS_Pr->SiS_SysFlags |= SF_IsM653; break;
1055 default: SiS_Pr->SiS_SysFlags |= SF_IsM650; break;
1056 }
1057 break;
1058 case 0xb0:
1059 SiS_Pr->SiS_SysFlags |= SF_Is652;
1060 break;
1061 default:
1062 SiS_Pr->SiS_SysFlags |= SF_IsM650;
1063 break;
1064 }
1065 }
1066 }
1067
1068 if(SiS_Pr->ChipType >= SIS_760 && SiS_Pr->ChipType <= SIS_761) {
1069 if(SiS_GetReg(SiS_Pr->SiS_P3d4,0x78) & 0x30) {
1070 SiS_Pr->SiS_SysFlags |= SF_760LFB;
1071 }
1072 if(SiS_GetReg(SiS_Pr->SiS_P3d4,0x79) & 0xf0) {
1073 SiS_Pr->SiS_SysFlags |= SF_760UMA;
1074 }
1075 }
1076}
1077
1078/*********************************************/
1079/* HELPER: Init PCI & Engines */
1080/*********************************************/
1081
1082static void
1083SiSInitPCIetc(struct SiS_Private *SiS_Pr)
1084{
1085 switch(SiS_Pr->ChipType) {
1086#ifdef CONFIG_FB_SIS_300
1087 case SIS_300:
1088 case SIS_540:
1089 case SIS_630:
1090 case SIS_730:
1091 /* Set - PCI LINEAR ADDRESSING ENABLE (0x80)
1092 * - RELOCATED VGA IO ENABLED (0x20)
1093 * - MMIO ENABLED (0x01)
1094 * Leave other bits untouched.
1095 */
1096 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x20,0xa1);
1097 /* - Enable 2D (0x40)
1098 * - Enable 3D (0x02)
1099 * - Enable 3D Vertex command fetch (0x10) ?
1100 * - Enable 3D command parser (0x08) ?
1101 */
1102 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x1E,0x5A);
1103 break;
1104#endif
1105#ifdef CONFIG_FB_SIS_315
1106 case SIS_315H:
1107 case SIS_315:
1108 case SIS_315PRO:
1109 case SIS_650:
1110 case SIS_740:
1111 case SIS_330:
1112 case SIS_661:
1113 case SIS_741:
1114 case SIS_660:
1115 case SIS_760:
1116 case SIS_761:
1117 case SIS_340:
1118 case XGI_40:
1119 /* See above */
1120 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x20,0xa1);
1121 /* - Enable 3D G/L transformation engine (0x80)
1122 * - Enable 2D (0x40)
1123 * - Enable 3D vertex command fetch (0x10)
1124 * - Enable 3D command parser (0x08)
1125 * - Enable 3D (0x02)
1126 */
1127 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x1E,0xDA);
1128 break;
1129 case XGI_20:
1130 case SIS_550:
1131 /* See above */
1132 SiS_SetRegOR(SiS_Pr->SiS_P3c4,0x20,0xa1);
1133 /* No 3D engine ! */
1134 /* - Enable 2D (0x40)
1135 * - disable 3D
1136 */
1137 SiS_SetRegANDOR(SiS_Pr->SiS_P3c4,0x1E,0x60,0x40);
1138 break;
1139#endif
1140 default:
1141 break;
1142 }
1143}
1144
1145/*********************************************/
1146/* HELPER: SetLVDSetc */
1147/*********************************************/
1148
1149static
1150void
1151SiSSetLVDSetc(struct SiS_Private *SiS_Pr)
1152{
1153 unsigned short temp;
1154
1155 SiS_Pr->SiS_IF_DEF_LVDS = 0;
1156 SiS_Pr->SiS_IF_DEF_TRUMPION = 0;
1157 SiS_Pr->SiS_IF_DEF_CH70xx = 0;
1158 SiS_Pr->SiS_IF_DEF_CONEX = 0;
1159
1160 SiS_Pr->SiS_ChrontelInit = 0;
1161
1162 if(SiS_Pr->ChipType == XGI_20) return;
1163
1164 /* Check for SiS30x first */
1165 temp = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x00);
1166 if((temp == 1) || (temp == 2)) return;
1167
1168 switch(SiS_Pr->ChipType) {
1169#ifdef CONFIG_FB_SIS_300
1170 case SIS_540:
1171 case SIS_630:
1172 case SIS_730:
1173 temp = (SiS_GetReg(SiS_Pr->SiS_P3d4,0x37) & 0x0e) >> 1;
1174 if((temp >= 2) && (temp <= 5)) SiS_Pr->SiS_IF_DEF_LVDS = 1;
1175 if(temp == 3) SiS_Pr->SiS_IF_DEF_TRUMPION = 1;
1176 if((temp == 4) || (temp == 5)) {
1177 /* Save power status (and error check) - UNUSED */
1178 SiS_Pr->SiS_Backup70xx = SiS_GetCH700x(SiS_Pr, 0x0e);
1179 SiS_Pr->SiS_IF_DEF_CH70xx = 1;
1180 }
1181 break;
1182#endif
1183#ifdef CONFIG_FB_SIS_315
1184 case SIS_550:
1185 case SIS_650:
1186 case SIS_740:
1187 case SIS_330:
1188 temp = (SiS_GetReg(SiS_Pr->SiS_P3d4,0x37) & 0x0e) >> 1;
1189 if((temp >= 2) && (temp <= 3)) SiS_Pr->SiS_IF_DEF_LVDS = 1;
1190 if(temp == 3) SiS_Pr->SiS_IF_DEF_CH70xx = 2;
1191 break;
1192 case SIS_661:
1193 case SIS_741:
1194 case SIS_660:
1195 case SIS_760:
1196 case SIS_761:
1197 case SIS_340:
1198 case XGI_20:
1199 case XGI_40:
1200 temp = (SiS_GetReg(SiS_Pr->SiS_P3d4,0x38) & 0xe0) >> 5;
1201 if((temp >= 2) && (temp <= 3)) SiS_Pr->SiS_IF_DEF_LVDS = 1;
1202 if(temp == 3) SiS_Pr->SiS_IF_DEF_CH70xx = 2;
1203 if(temp == 4) SiS_Pr->SiS_IF_DEF_CONEX = 1; /* Not yet supported */
1204 break;
1205#endif
1206 default:
1207 break;
1208 }
1209}
1210
1211/*********************************************/
1212/* HELPER: Enable DSTN/FSTN */
1213/*********************************************/
1214
1215void
1216SiS_SetEnableDstn(struct SiS_Private *SiS_Pr, int enable)
1217{
1218 SiS_Pr->SiS_IF_DEF_DSTN = enable ? 1 : 0;
1219}
1220
1221void
1222SiS_SetEnableFstn(struct SiS_Private *SiS_Pr, int enable)
1223{
1224 SiS_Pr->SiS_IF_DEF_FSTN = enable ? 1 : 0;
1225}
1226
1227/*********************************************/
1228/* HELPER: Get modeflag */
1229/*********************************************/
1230
1231unsigned short
1232SiS_GetModeFlag(struct SiS_Private *SiS_Pr, unsigned short ModeNo,
1233 unsigned short ModeIdIndex)
1234{
1235 if(SiS_Pr->UseCustomMode) {
1236 return SiS_Pr->CModeFlag;
1237 } else if(ModeNo <= 0x13) {
1238 return SiS_Pr->SiS_SModeIDTable[ModeIdIndex].St_ModeFlag;
1239 } else {
1240 return SiS_Pr->SiS_EModeIDTable[ModeIdIndex].Ext_ModeFlag;
1241 }
1242}
1243
1244/*********************************************/
1245/* HELPER: Determine ROM usage */
1246/*********************************************/
1247
1248bool
1249SiSDetermineROMLayout661(struct SiS_Private *SiS_Pr)
1250{
1251 unsigned char *ROMAddr = SiS_Pr->VirtualRomBase;
1252 unsigned short romversoffs, romvmaj = 1, romvmin = 0;
1253
1254 if(SiS_Pr->ChipType >= XGI_20) {
1255 /* XGI ROMs don't qualify */
1256 return false;
1257 } else if(SiS_Pr->ChipType >= SIS_761) {
1258 /* I very much assume 761, 340 and newer will use new layout */
1259 return true;
1260 } else if(SiS_Pr->ChipType >= SIS_661) {
1261 if((ROMAddr[0x1a] == 'N') &&
1262 (ROMAddr[0x1b] == 'e') &&
1263 (ROMAddr[0x1c] == 'w') &&
1264 (ROMAddr[0x1d] == 'V')) {
1265 return true;
1266 }
1267 romversoffs = ROMAddr[0x16] | (ROMAddr[0x17] << 8);
1268 if(romversoffs) {
1269 if((ROMAddr[romversoffs+1] == '.') || (ROMAddr[romversoffs+4] == '.')) {
1270 romvmaj = ROMAddr[romversoffs] - '0';
1271 romvmin = ((ROMAddr[romversoffs+2] -'0') * 10) + (ROMAddr[romversoffs+3] - '0');
1272 }
1273 }
1274 if((romvmaj != 0) || (romvmin >= 92)) {
1275 return true;
1276 }
1277 } else if(IS_SIS650740) {
1278 if((ROMAddr[0x1a] == 'N') &&
1279 (ROMAddr[0x1b] == 'e') &&
1280 (ROMAddr[0x1c] == 'w') &&
1281 (ROMAddr[0x1d] == 'V')) {
1282 return true;
1283 }
1284 }
1285 return false;
1286}
1287
1288static void
1289SiSDetermineROMUsage(struct SiS_Private *SiS_Pr)
1290{
1291 unsigned char *ROMAddr = SiS_Pr->VirtualRomBase;
1292 unsigned short romptr = 0;
1293
1294 SiS_Pr->SiS_UseROM = false;
1295 SiS_Pr->SiS_ROMNew = false;
1296 SiS_Pr->SiS_PWDOffset = 0;
1297
1298 if(SiS_Pr->ChipType >= XGI_20) return;
1299
1300 if((ROMAddr) && (SiS_Pr->UseROM)) {
1301 if(SiS_Pr->ChipType == SIS_300) {
1302 /* 300: We check if the code starts below 0x220 by
1303 * checking the jmp instruction at the beginning
1304 * of the BIOS image.
1305 */
1306 if((ROMAddr[3] == 0xe9) && ((ROMAddr[5] << 8) | ROMAddr[4]) > 0x21a)
1307 SiS_Pr->SiS_UseROM = true;
1308 } else if(SiS_Pr->ChipType < SIS_315H) {
1309 /* Sony's VAIO BIOS 1.09 follows the standard, so perhaps
1310 * the others do as well
1311 */
1312 SiS_Pr->SiS_UseROM = true;
1313 } else {
1314 /* 315/330 series stick to the standard(s) */
1315 SiS_Pr->SiS_UseROM = true;
1316 if((SiS_Pr->SiS_ROMNew = SiSDetermineROMLayout661(SiS_Pr))) {
1317 SiS_Pr->SiS_EMIOffset = 14;
1318 SiS_Pr->SiS_PWDOffset = 17;
1319 SiS_Pr->SiS661LCD2TableSize = 36;
1320 /* Find out about LCD data table entry size */
1321 if((romptr = SISGETROMW(0x0102))) {
1322 if(ROMAddr[romptr + (32 * 16)] == 0xff)
1323 SiS_Pr->SiS661LCD2TableSize = 32;
1324 else if(ROMAddr[romptr + (34 * 16)] == 0xff)
1325 SiS_Pr->SiS661LCD2TableSize = 34;
1326 else if(ROMAddr[romptr + (36 * 16)] == 0xff) /* 0.94, 2.05.00+ */
1327 SiS_Pr->SiS661LCD2TableSize = 36;
1328 else if( (ROMAddr[romptr + (38 * 16)] == 0xff) || /* 2.00.00 - 2.02.00 */
1329 (ROMAddr[0x6F] & 0x01) ) { /* 2.03.00 - <2.05.00 */
1330 SiS_Pr->SiS661LCD2TableSize = 38; /* UMC data layout abandoned at 2.05.00 */
1331 SiS_Pr->SiS_EMIOffset = 16;
1332 SiS_Pr->SiS_PWDOffset = 19;
1333 }
1334 }
1335 }
1336 }
1337 }
1338}
1339
1340/*********************************************/
1341/* HELPER: SET SEGMENT REGISTERS */
1342/*********************************************/
1343
1344static void
1345SiS_SetSegRegLower(struct SiS_Private *SiS_Pr, unsigned short value)
1346{
1347 unsigned short temp;
1348
1349 value &= 0x00ff;
1350 temp = SiS_GetRegByte(SiS_Pr->SiS_P3cb) & 0xf0;
1351 temp |= (value >> 4);
1352 SiS_SetRegByte(SiS_Pr->SiS_P3cb, temp);
1353 temp = SiS_GetRegByte(SiS_Pr->SiS_P3cd) & 0xf0;
1354 temp |= (value & 0x0f);
1355 SiS_SetRegByte(SiS_Pr->SiS_P3cd, temp);
1356}
1357
1358static void
1359SiS_SetSegRegUpper(struct SiS_Private *SiS_Pr, unsigned short value)
1360{
1361 unsigned short temp;
1362
1363 value &= 0x00ff;
1364 temp = SiS_GetRegByte(SiS_Pr->SiS_P3cb) & 0x0f;
1365 temp |= (value & 0xf0);
1366 SiS_SetRegByte(SiS_Pr->SiS_P3cb, temp);
1367 temp = SiS_GetRegByte(SiS_Pr->SiS_P3cd) & 0x0f;
1368 temp |= (value << 4);
1369 SiS_SetRegByte(SiS_Pr->SiS_P3cd, temp);
1370}
1371
1372static void
1373SiS_SetSegmentReg(struct SiS_Private *SiS_Pr, unsigned short value)
1374{
1375 SiS_SetSegRegLower(SiS_Pr, value);
1376 SiS_SetSegRegUpper(SiS_Pr, value);
1377}
1378
1379static void
1380SiS_ResetSegmentReg(struct SiS_Private *SiS_Pr)
1381{
1382 SiS_SetSegmentReg(SiS_Pr, 0);
1383}
1384
1385static void
1386SiS_SetSegmentRegOver(struct SiS_Private *SiS_Pr, unsigned short value)
1387{
1388 unsigned short temp = value >> 8;
1389
1390 temp &= 0x07;
1391 temp |= (temp << 4);
1392 SiS_SetReg(SiS_Pr->SiS_P3c4,0x1d,temp);
1393 SiS_SetSegmentReg(SiS_Pr, value);
1394}
1395
1396static void
1397SiS_ResetSegmentRegOver(struct SiS_Private *SiS_Pr)
1398{
1399 SiS_SetSegmentRegOver(SiS_Pr, 0);
1400}
1401
1402static void
1403SiS_ResetSegmentRegisters(struct SiS_Private *SiS_Pr)
1404{
1405 if((IS_SIS65x) || (SiS_Pr->ChipType >= SIS_661)) {
1406 SiS_ResetSegmentReg(SiS_Pr);
1407 SiS_ResetSegmentRegOver(SiS_Pr);
1408 }
1409}
1410
1411/*********************************************/
1412/* HELPER: GetVBType */
1413/*********************************************/
1414
1415static
1416void
1417SiS_GetVBType(struct SiS_Private *SiS_Pr)
1418{
1419 unsigned short flag = 0, rev = 0, nolcd = 0;
1420 unsigned short p4_0f, p4_25, p4_27;
1421
1422 SiS_Pr->SiS_VBType = 0;
1423
1424 if((SiS_Pr->SiS_IF_DEF_LVDS) || (SiS_Pr->SiS_IF_DEF_CONEX))
1425 return;
1426
1427 if(SiS_Pr->ChipType == XGI_20)
1428 return;
1429
1430 flag = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x00);
1431
1432 if(flag > 3)
1433 return;
1434
1435 rev = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x01);
1436
1437 if(flag >= 2) {
1438 SiS_Pr->SiS_VBType = VB_SIS302B;
1439 } else if(flag == 1) {
1440 if(rev >= 0xC0) {
1441 SiS_Pr->SiS_VBType = VB_SIS301C;
1442 } else if(rev >= 0xB0) {
1443 SiS_Pr->SiS_VBType = VB_SIS301B;
1444 /* Check if 30xB DH version (no LCD support, use Panel Link instead) */
1445 nolcd = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x23);
1446 if(!(nolcd & 0x02)) SiS_Pr->SiS_VBType |= VB_NoLCD;
1447 } else {
1448 SiS_Pr->SiS_VBType = VB_SIS301;
1449 }
1450 }
1451 if(SiS_Pr->SiS_VBType & (VB_SIS301B | VB_SIS301C | VB_SIS302B)) {
1452 if(rev >= 0xE0) {
1453 flag = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x39);
1454 if(flag == 0xff) SiS_Pr->SiS_VBType = VB_SIS302LV;
1455 else SiS_Pr->SiS_VBType = VB_SIS301C; /* VB_SIS302ELV; */
1456 } else if(rev >= 0xD0) {
1457 SiS_Pr->SiS_VBType = VB_SIS301LV;
1458 }
1459 }
1460 if(SiS_Pr->SiS_VBType & (VB_SIS301C | VB_SIS301LV | VB_SIS302LV | VB_SIS302ELV)) {
1461 p4_0f = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x0f);
1462 p4_25 = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x25);
1463 p4_27 = SiS_GetReg(SiS_Pr->SiS_Part4Port,0x27);
1464 SiS_SetRegAND(SiS_Pr->SiS_Part4Port,0x0f,0x7f);
1465 SiS_SetRegOR(SiS_Pr->SiS_Part4Port,0x25,0x08);
1466 SiS_SetRegAND(SiS_Pr->SiS_Part4Port,0x27,0xfd);
1467 if(SiS_GetReg(SiS_Pr->SiS_Part4Port,0x26) & 0x08) {
1468 SiS_Pr->SiS_VBType |= VB_UMC;
1469 }
1470 SiS_SetReg(SiS_Pr->SiS_Part4Port,0x27,p4_27);
1471 SiS_SetReg(SiS_Pr->SiS_Part4Port,0x25,p4_25);
1472 SiS_SetReg(SiS_Pr->SiS_Part4Port,0x0f,p4_0f);
1473 }
1474}
1475
1476/*********************************************/
1477/* HELPER: Check RAM size */
1478/*********************************************/
1479
1480static bool
1481SiS_CheckMemorySize(struct SiS_Private *SiS_Pr, unsigned short ModeNo,
1482 unsigned short ModeIdIndex)
1483{
1484 unsigned short AdapterMemSize = SiS_Pr->VideoMemorySize / (1024*1024);
1485 unsigned short modeflag = SiS_GetModeFlag(SiS_Pr, ModeNo, ModeIdIndex);
1486 unsigned short memorysize = ((modeflag & MemoryInfoFlag) >> MemorySizeShift) + 1;
1487
1488 if(!AdapterMemSize) return true;
1489
1490 if(AdapterMemSize < memorysize) return false;
1491 return true;
1492}
1493
1494/*********************************************/
1495/* HELPER: Get DRAM type */
1496/*********************************************/
1497
1498#ifdef CONFIG_FB_SIS_315
1499static unsigned char
1500SiS_Get310DRAMType(struct SiS_Private *SiS_Pr)
1501{
1502 unsigned char data;
1503
1504 if((*SiS_Pr->pSiS_SoftSetting) & SoftDRAMType) {
1505 data = (*SiS_Pr->pSiS_SoftSetting) & 0x03;
1506 } else {
1507 if(SiS_Pr->ChipType >= XGI_20) {
1508 /* Do I need this? SR17 seems to be zero anyway... */
1509 data = 0;
1510 } else if(SiS_Pr->ChipType >= SIS_340) {
1511 /* TODO */
1512 data = 0;
1513		} else if(SiS_Pr->ChipType >= SIS_661) {
1514 if(SiS_Pr->SiS_ROMNew) {
1515 data = ((SiS_GetReg(SiS_Pr->SiS_P3d4,0x78) & 0xc0) >> 6);
1516 } else {
1517 data = SiS_GetReg(SiS_Pr->SiS_P3d4,0x78) & 0x07;
1518 }
1519 } else if(IS_SIS550650740) {
1520 data = SiS_GetReg(SiS_Pr->SiS_P3c4,0x13) & 0x07;
1521 } else { /* 315, 330 */
1522 data = SiS_GetReg(SiS_Pr->SiS_P3c4,0x3a) & 0x03;
1523 if(SiS_Pr->ChipType == SIS_330) {
1524 if(data > 1) {
1525 switch(SiS_GetReg(SiS_Pr->SiS_P3d4,0x5f) & 0x30) {
1526 case 0x00: data = 1; break;
1527 case 0x10: data = 3; break;
1528 case 0x20: data = 3; break;
1529 case 0x30: data = 2; break;
1530 }
1531 } else {
1532 data = 0;
1533 }
1534 }
1535 }
1536 }
1537
1538 return data;
1539}
1540
1541static unsigned short
1542SiS_GetMCLK(struct SiS_Private *SiS_Pr)
1543{
1544 unsigned char *ROMAddr = SiS_Pr->VirtualRomBase;
1545 unsigned short index;
1546
1547 index = SiS_Get310DRAMType(SiS_Pr);
1548 if(SiS_Pr->ChipType >= SIS_661) {
1549 if(SiS_Pr->SiS_ROMNew) {
1550 return((unsigned short)(SISGETROMW((0x90 + (index * 5) + 3))));
1551 }
1552 return(SiS_Pr->SiS_MCLKData_0[index].CLOCK);
1553 } else if(index >= 4) {
1554 return(SiS_Pr->SiS_MCLKData_1[index - 4].CLOCK);
1555 } else {
1556 return(SiS_Pr->SiS_MCLKData_0[index].CLOCK);
1557 }
1558}
1559#endif
1560
1561/*********************************************/
1562/* HELPER: ClearBuffer */
1563/*********************************************/
1564
1565static void
1566SiS_ClearBuffer(struct SiS_Private *SiS_Pr, unsigned short ModeNo)
1567{
1568 unsigned char SISIOMEMTYPE *memaddr = SiS_Pr->VideoMemoryAddress;
1569 unsigned int memsize = SiS_Pr->VideoMemorySize;
1570 unsigned short SISIOMEMTYPE *pBuffer;
1571 int i;
1572
1573 if(!memaddr || !memsize) return;
1574
1575 if(SiS_Pr->SiS_ModeType >= ModeEGA) {
1576 if(ModeNo > 0x13) {
1577 memset_io(memaddr, 0, memsize);
1578 } else {
1579 pBuffer = (unsigned short SISIOMEMTYPE *)memaddr;
1580 for(i = 0; i < 0x4000; i++) writew(0x0000, &pBuffer[i]);
1581 }
1582 } else if(SiS_Pr->SiS_ModeType < ModeCGA) {
1583 pBuffer = (unsigned short SISIOMEMTYPE *)memaddr;
1584 for(i = 0; i < 0x4000; i++) writew(0x0720, &pBuffer[i]);
1585 } else {
1586 memset_io(memaddr, 0, 0x8000);
1587 }
1588}
1589
1590/*********************************************/
1591/* HELPER: SearchModeID */
1592/*********************************************/
1593
1594bool
1595SiS_SearchModeID(struct SiS_Private *SiS_Pr, unsigned short *ModeNo,
1596 unsigned short *ModeIdIndex)
1597{
1598 unsigned char VGAINFO = SiS_Pr->SiS_VGAINFO;
1599
1600 if((*ModeNo) <= 0x13) {
1601
1602 if((*ModeNo) <= 0x05) (*ModeNo) |= 0x01;
1603
1604 for((*ModeIdIndex) = 0; ;(*ModeIdIndex)++) {
1605 if(SiS_Pr->SiS_SModeIDTable[(*ModeIdIndex)].St_ModeID == (*ModeNo)) break;
1606 if(SiS_Pr->SiS_SModeIDTable[(*ModeIdIndex)].St_ModeID == 0xFF) return false;
1607 }
1608
1609 if((*ModeNo) == 0x07) {
1610 if(VGAINFO & 0x10) (*ModeIdIndex)++; /* 400 lines */
1611 /* else 350 lines */
1612 }
1613 if((*ModeNo) <= 0x03) {
1614 if(!(VGAINFO & 0x80)) (*ModeIdIndex)++;
1615 if(VGAINFO & 0x10) (*ModeIdIndex)++; /* 400 lines */
1616 /* else 350 lines */
1617 }
1618 /* else 200 lines */
1619
1620 } else {
1621
1622 for((*ModeIdIndex) = 0; ;(*ModeIdIndex)++) {
1623 if(SiS_Pr->SiS_EModeIDTable[(*ModeIdIndex)].Ext_ModeID == (*ModeNo)) break;
1624 if(SiS_Pr->SiS_EModeIDTable[(*ModeIdIndex)].Ext_ModeID == 0xFF) return false;
1625 }
1626
1627 }
1628 return true;
1629}
1630
1631/*********************************************/
1632/* HELPER: GetModePtr */
1633/*********************************************/
1634
1635unsigned short
1636SiS_GetModePtr(struct SiS_Private *SiS_Pr, unsigned short ModeNo, unsigned short ModeIdIndex)
1637{
1638 unsigned short index;
1639
1640 if(ModeNo <= 0x13) {
1641 index = SiS_Pr->SiS_SModeIDTable[ModeIdIndex].St_StTableIndex;
1642 } else {
1643 if(SiS_Pr->SiS_ModeType <= ModeEGA) index = 0x1B;
1644 else index = 0x0F;
1645 }
1646 return index;
1647}
1648
1649/****************************************…
Publication number: US 7899188 B2
Publication type: Grant
Application number: US 11/756,371
Publication date: Mar 1, 2011
Filing date: May 31, 2007
Priority date: May 31, 2007
Fee status: Paid
Also published as: US 20080298579
Inventor: Hosame H. Abu-Amara
Original assignee: Motorola Mobility, Inc.
Method and system to authenticate a peer in a peer-to-peer network
US 7899188 B2
Abstract
A system (100) and method (500) to authenticate a peer in a peer-to-peer network are provided. The system can include a first peer (110) to locally create a secret key (112) and use the secret key to produce a public-key pair (120) comprising an identifier name (113) and a small public-key (115), and a second peer (160) to locally authenticate the identifier name of the public-key pair by requesting (405) the first peer to produce a unique dataset that does not reveal the secret-key and yet validates, when the large public-key is applied to a portion of the unique dataset, that the public-key pair was generated with the secret-key, without using an external authentication system.
Images(5)
Previous page
Next page
Claims(20)
1. A method to generate a public-key pair for an identifier-name i of a peer, comprising:
choosing a secret-key, s;
applying a modulus operator to the secret-key to produce a large public-key, v;
performing a hash of the large public-key to produce a small public-key, hv;
combining the identifier name with the small public-key to produce a public-key pair <i, hv>; and
sharing the public-key pair in a peer-to-peer network;
wherein the steps of choosing, applying, performing, and combining occur locally in the peer disassociated with an external authentication system.
2. The method of claim 1, wherein the modulus operator is v=s² mod n, where n is a value that is shared among peers in the peer-to-peer network.
3. The method of claim 2, wherein n is a product (n1*n2) of two prime numbers.
4. The method of claim 2, wherein s and n are both the same length, and the hash of v produces a small public-key hv that has a length less than s and n.
5. A method to authenticate an identifier-name of a first peer in a peer-to-peer network by a second peer, the method comprising:
receiving a public-key pair <i, hv> comprising the identifier name i and a small public-key hv locally created by the first peer using a secret key that is held in confidence by the first peer; and
locally authenticating the identifier name i by requesting the first peer to produce a unique data set, and comparing the unique data set to a second unique data generated by the second peer from a portion of the produced unique data set using the modulus operator v, wherein the authenticating is disassociated with an external authentication system.
6. The method of claim 5, wherein the method further comprises the steps of:
choosing a secret-key, s;
applying a modulus operator [s² mod n] to the secret-key to produce a large public-key, v;
performing a hash of the large public-key to produce the small public-key, hv=hash [s² mod n]; and
combining the identifier name with the small public-key to produce the public-key pair <i, hv>,
wherein n is a value that is known to the first peer and the second peer and is a product of two prime numbers,
wherein the steps of choosing, applying, performing, and combining occur locally disassociated with an external authentication system.
7. The method of claim 5, wherein the method further comprises the steps of:
receiving from the first peer a unique data set [(ri)² mod n] using random numbers r1, r2, . . . , ri for i=1 to k;
informing the first peer of a first selected subset r′i and a second selected subset r″i of the unique data set;
receiving from the first peer a first reply xi=([s*ri] mod n) for each ri² mod n of the first selected subset, r′i;
receiving from the first peer a second reply yi=(ri mod n) for each ri² mod n of the second selected subset, r″i; and
validating the identifier name if ([v*r′i²] mod n) equals xi² mod n, and if (r″i² mod n) equals yi² mod n.
8. The method of claim 5, wherein the external authentication system is at least one among a Public-key Infrastructure (PKI) that issues PKI certificates, a remote authentication server that digitally verifies signatures, or a remote log-in server that ensures a uniqueness of an identifier.
9. A system to authenticate a peer in a peer-to-peer network, the system comprising:
a first peer to locally create a secret key, s, and use the secret key to produce a public-key pair <i, hv> comprising an identifier name, i, and a small public-key, hv; and
a second peer to locally authenticate the identifier name of the public-key pair by requesting the first peer to produce a unique dataset that does not reveal the secret-key and yet validates that the public-key pair was generated with the secret-key when the corresponding large public-key is applied to a portion of the unique dataset without using an external authentication system.
10. The system of claim 9, wherein the second peer
requests the first peer to generate the unique data from a sequence of random numbers;
selects a first subset and a second subset of the unique data set responsive to receiving the unique data set from the first peer; and
informs the first peer of the first subset selected and the second subset selected.
11. The system of claim 10, wherein the second peer receives the unique data set, selects a first subset and a second subset of the unique data set, and informs the first peer of the first and second subset selected.
12. The system of claim 11, wherein the first peer sends (s*ri) mod (n) to the second peer for each ri² of the first subset as a first reply, and the first peer sends (ri) mod (n) to the second peer for each ri² of the second subset as a second reply.
13. The system of claim 12, wherein the second peer validates the identifier-name if a square of the first reply is equal to (v*ri²) mod (n) for each ri² in the first reply, and a square of the second reply is equal to ri² mod (n) for each ri² in the second reply.
14. The system of claim 10, wherein the second peer
squares a first reply to produce a first squared reply responsive to the first peer processing the first subset with the secret-key to produce the first reply; and
squares a second reply to produce a second squared reply responsive to first peer processing the second subset without the secret-key to produce the second reply.
15. The system of claim 14, wherein the second peer
processes the first subset of the unique data set with the large public-key to produce a first reference, and processes the second subset of the unique data set without the large public-key to produce a second reference; and
validates the identifier-name of the first peer if the first squared reply equals the first reference and the second squared reply equals the second reference.
16. The system of claim 9, wherein the first peer
applies a modulus operator to the secret key, s, to produce a large public-key, v;
performs a hash of the large public-key to produce the small public-key, hv; and
combines the identifier name with the small public-key to produce the public-key pair <i, hv>,
wherein the steps of choosing, applying, performing, and combining occur locally in the peer without use of an external authentication system.
17. The system of claim 16, wherein the modulus operator is v=s² mod n.
18. The system of claim 17, wherein n is a product (n1*n2) of two primes, where n is preconfigured in the first peer, and n is shared with the second peer.
19. The system of claim 9, wherein the second peer requests the first peer to send a large public-key and checks if a hash of the large public-key matches the short public-key.
20. The system of claim 9, wherein the first peer generates k random numbers, r1, r2, . . . , ri for i=1 to k, and sends the unique data set (ri²) mod n to the second peer.
Description
FIELD OF THE INVENTION
The present invention relates to computer and telecommunication network security in mobile devices and, more particularly, to a method and system for authenticating a peer in a peer-to-peer network.
BACKGROUND
In a peer-to-peer network, a peer can identify himself or herself by an identifier name. The identifier name allows other peers to contact the peer, and it also allows the other peers to know with whom they are communicating. However, in a large peer-to-peer network there can be multiple peers that use the same identifier name. In such cases, it can be difficult to ensure that an identifier name in the peer-to-peer network is unique to the peer. Moreover, when exchanging secure information and sharing proprietary data among peers, it is important to validate with whom the peer is communicating.
SUMMARY
In one embodiment of the present disclosure, a method to generate a public-key pair for an identifier-name i of a peer is provided. The method can include choosing a secret-key, s, applying a modulus operator to the secret-key to produce a large public-key, v, performing a hash of the large public-key to produce a small public-key, hv, combining the identifier name with the small public-key to produce a public-key pair <i, hv>, and sharing the public-key pair in a peer-to-peer network. The steps of choosing, applying, performing, and combining can occur locally in the peer without use of, or in disassociation with, an external authentication system. The modulus operator can be v=s² mod n, where n is a value that is commonly shared among peers in the peer-to-peer network. The term n can be preconfigured in the peer. The term n can be a product (n1*n2) of two prime numbers, where an administrator of the peer-to-peer network chooses the two prime numbers and then keeps them secret from all peers in the network. In one arrangement, s and n are both the same length, and the hash of v produces a small public-key hv that has a length less than s and n.
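The key-generation steps above can be sketched in Python as follows. The toy modulus, the SHA-1 hash, and the truncation length are illustrative assumptions for readability only; the disclosure itself calls for a modulus of at least 1023 bits formed from two secret primes.

```python
import hashlib
import secrets

# Toy modulus for illustration only: 61 * 53. A real deployment would use
# n = n1*n2 of two large secret primes so that n is at least 1023 bits.
N = 3233

def make_public_key_pair(identifier_name: str, n: int = N):
    """Sketch: choose a secret-key s, derive the large public-key
    v = s^2 mod n, hash v to a small public-key hv, and pair hv with
    the identifier name -- all locally, with no external authority."""
    s = secrets.randbelow(n - 2) + 2               # secret-key, held in confidence
    v = pow(s, 2, n)                               # large public-key: s^2 mod n
    hv = hashlib.sha1(str(v).encode()).hexdigest()[:15]  # small public-key (hash of v)
    return s, v, (identifier_name, hv)
```

Only the pair `(identifier_name, hv)` is shared with other peers; `s` never leaves the device, and `v` is produced on request during authentication.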
In a second embodiment of the present disclosure, a method to authenticate an identifier-name of a first peer in a peer-to-peer network by a second peer is provided. The method can include receiving a public-key pair <i, hv> comprising the identifier name i and a small public-key hv locally created by the first peer using a secret key that is held in confidence by the first peer, and locally authenticating the identifier name i by requesting the first peer to produce a unique data set, and comparing the unique data set to a second unique data set generated by the second peer from a portion of the produced unique data set using the modulus operator v, wherein the authenticating is disassociated with an external authentication system.
The method can include choosing a secret-key, s, applying a modulus operator [s² mod n] to the secret-key to produce a large public-key, v, performing a hash of the large public-key to produce the small public-key, hv=hash [s² mod n], and combining the identifier name with the small public-key to produce the public-key pair <i, hv>, where n is a value that is known to the first peer (Peer 1) and the second peer (Peer 2) and is a product of two prime numbers, and the steps of choosing, applying, performing, and combining occur locally without use of an external authentication system. The method can include receiving from Peer 1 a unique data set [(ri)² mod n] using random numbers r1, r2, . . . , ri for i=1 to k, where k is a parameter chosen by Peer 2, informing Peer 1 of a first selected subset r′i and a second selected subset r″i of the unique data set, receiving from Peer 1 a first reply xi=([s*ri] mod n) for each ri² mod n of the first selected subset, r′i, receiving from Peer 1 a second reply yi=(ri mod n) for each ri² mod n of the second selected subset, r″i, and validating the identifier name if ([v*r′i²] mod n) equals xi² mod n, and if (r″i² mod n) equals yi² mod n. The method can authenticate a peer without accessing a Public-key Infrastructure (PKI) that issues PKI certificates, a remote authentication server that digitally verifies signatures, or a remote log-in server that ensures a uniqueness of an identifier.
In a third embodiment of the present disclosure, a system to authenticate a peer in a peer-to-peer network is provided. The system can include a first peer to locally create a secret key, s, and use the secret key to produce a public-key pair <i, hv> comprising an identifier name, i, and a small public-key, hv, and a second peer to locally authenticate the identifier name of the public-key pair by requesting the first peer to produce a unique dataset that does not reveal the secret-key and yet validates that the public-key pair was generated with the secret-key when the large public-key is applied to a portion of the unique dataset without using an external authentication system.
The second peer can request the first peer to generate the unique data from a sequence of random numbers, select a first subset and a second subset of the unique data set responsive to receiving the unique data set from the first peer, and inform the first peer of the first subset selected and the second subset selected. Upon the first peer processing the first subset with the secret-key to produce a first reply, the second peer can square the first reply to produce a first squared reply. Upon the first peer processing the second subset without the secret-key to produce a second reply, the second peer can square the second reply to produce a second squared reply. The second peer can then process the first subset of the unique data set with the large public-key to produce a first reference, process the second subset of the unique data set without the large public-key to produce a second reference, and validate the identifier-name of the first peer if the first squared reply equals the first reference and the second squared reply equals the second reference.
The first peer can apply a modulus operator to the secret key, s, to produce a large public-key, v, perform a hash of the large public-key to produce the small public-key, hv, and combine the identifier name with the small public-key to produce the public-key pair <i, hv>, wherein the steps of choosing, applying, performing, and combining occur locally in the peer without use of an external authentication system. The modulus operator can be v=s² mod n, where n is preconfigured in Peer 1, and n is shared with Peer 2. The term n is a product (n1*n2) of two primes, where an administrator of the peer-to-peer network chooses the two prime numbers and then keeps the two prime numbers secret from all peers in the network. In a first level of authentication, Peer 2 can request Peer 1 to send a large public-key and check if a hash of the large public-key matches the short public-key. In a second level of authentication, Peer 2 can request Peer 1 to generate k random numbers, r1, r2, . . . , ri for i=1 to k, and send the unique data set (ri²) mod n to Peer 2. Peer 2 can receive the unique data set, select a first subset and a second subset of the unique data set, and inform Peer 1 of the first and second subset selected. In response, Peer 1 can send (s*ri) mod (n) to Peer 2 for each ri² of the first subset as a first reply, and also send (ri) mod (n) to Peer 2 for each ri² of the second subset as a second reply. Peer 2 can validate the identifier-name if a square of the first reply is equal to (v*ri²) mod (n) for each ri² in the first reply, and a square of the second reply is equal to ri² mod (n) for each ri² in the second reply.
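The whole challenge-response exchange can be simulated end-to-end in a few lines of Python. This is a sketch under assumed toy parameters (a small modulus and simple half/half subsets); the disclosure only requires k to be an integer, preferably larger than 30, and the subsets to be chosen at random by Peer 2.

```python
import random
import secrets

N = 3233  # toy modulus (61*53); the disclosure calls for >= 1023 bits

def keygen(n=N):
    """Peer 1's local key generation: secret-key s and large public-key v."""
    s = secrets.randbelow(n - 2) + 2
    return s, pow(s, 2, n)

def authenticate(s, v, k=30, n=N):
    """One run of the exchange between Peer 1 (prover, holds s) and
    Peer 2 (verifier, knows only v). Returns True if Peer 1 validates."""
    # Peer 1: generate k secret random numbers and publish r_i^2 mod n.
    r = [secrets.randbelow(n - 2) + 2 for _ in range(k)]
    dataset = [pow(ri, 2, n) for ri in r]

    # Peer 2: split the indices into two random subsets and announce them.
    idx = list(range(k))
    random.shuffle(idx)
    first, second = idx[: k // 2], idx[k // 2:]

    # Peer 1: reply with s*r_i mod n on the first subset (uses the secret-key)
    # and with r_i mod n on the second subset (does not use the secret-key).
    x = {i: (s * r[i]) % n for i in first}
    y = {i: r[i] % n for i in second}

    # Peer 2: square each reply and compare against references built from v
    # and the published dataset -- without ever seeing s itself.
    ok_first = all(pow(x[i], 2, n) == (v * dataset[i]) % n for i in first)
    ok_second = all(pow(y[i], 2, n) == dataset[i] for i in second)
    return ok_first and ok_second
```

A peer that does not hold the secret-key behind v cannot answer both subset challenges consistently, which is what lets Peer 2 validate the identifier name locally.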
BRIEF DESCRIPTION OF THE DRAWINGS
The features of the system, which are believed to be novel, are set forth with particularity in the appended claims. The embodiments herein can be understood by reference to the following description, taken in conjunction with the accompanying drawings, in the several figures of which like reference numerals identify like elements, and in which:
FIG. 1 depicts an exemplary Peer to Peer (P2P) network in accordance with an embodiment of the present invention;
FIG. 2 depicts an exemplary method operating in the P2P network in accordance with an embodiment of the present invention;
FIG. 3 depicts a portion of the method 200 in accordance with an embodiment of the present invention;
FIG. 4 depicts an exemplary flowchart for peer authentication in the P2P network in accordance with an embodiment of the present invention; and
FIG. 5 depicts an exemplary method for peer authentication in the P2P network in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION
While the specification concludes with claims defining the features of the embodiments of the invention that are regarded as novel, it is believed that the method, system, and other embodiments will be better understood from a consideration of the following description in conjunction with the drawing figures, in which like reference numerals are carried forward.
As required, detailed embodiments of the present method and system are disclosed herein. However, it is to be understood that the disclosed embodiments are merely exemplary, which can be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the embodiments of the present invention in virtually any appropriately detailed structure. Further, the terms and phrases used herein are not intended to be limiting but rather to provide an understandable description of the embodiment herein.
The terms “a” or “an,” as used herein, are defined as one or more than one. The term “plurality,” as used herein, is defined as two or more than two. The term “another,” as used herein, is defined as at least a second or more. The terms “including” and/or “having,” as used herein, are defined as comprising (i.e., open language). The term “coupled,” as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The term “processor” can be defined as any number of suitable processors, controllers, units, or the like that carry out a pre-programmed or programmed set of instructions. The terms “program,” “software application,” and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system. Further note, the term “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
Referring to FIG. 1, an exemplary peer-to-peer (P2P) network 100 for peer authentication is shown. The P2P network 100 can include a first peer 110 and a second peer 160 that communicate with one another, or other peers, for example using one or more wired (e.g. Ethernet, cable, PSTN etc.) or wireless technologies (e.g. CDMA, OFDM, IEEE 802.x, WiMAX, WiFi, etc). The P2P network 100 can include more than the number of peers shown, which may be present in various device forms. For example, a peer can be a cell phone, personal digital assistant, desktop personal computer, laptop, portable music player, or any other suitable communication device. Each peer in the P2P network can have an identifier name. For example, peer 110 can have an identifier name 113 that other peers in P2P network 100, such as peer 160, use to contact peer 110. Peers in the P2P network 100 can also assume multiple identifier names, or aliases, which can be used for different applications (e.g. friend list, business contact, game player name, etc.). The identifier name 113 can be combined with a public identifier (e.g. identifier_public 115) to generate a unique public-key pair 120. The public-key pair 120 can ensure that each peer in the P2P network can be distinguished from other peers.
In the P2P network, and in accordance with the embodiments herein presented, a peer can authenticate the identifier name of another peer using local authentication. More specifically, as shown in FIG. 1, peer 160 can authenticate an identity of peer 110 directly in a peer-to-peer arrangement without consulting or utilizing an external authorization system, such as a Public-key Infrastructure (PKI) system. That is, peers can authenticate one another directly without accessing an authorization server.
Broadly stated, peer 160 can authenticate the identifier name 113 of peer 110 by a series of steps which validates that peer 110 holds the secret key 112 used to generate the public-key pair 120. As part of the authentication, peer 160 challenges peer 110 to use the secret key 112 to compute a value. Though peer 160 does not know the secret-key 112, nor can peer 160 directly verify the value computed by peer 110, peer 160 can use the value computed by peer 110 with a hash available to both peer 110 and peer 160 to validate the public-key pair 120 presented by peer 110, thereby confirming that peer 110 holds the secret-key 112 and is the identified peer claimed in the public-key pair 120. More specifically, peer 160 requests peer 110 to encrypt a series of randomly chosen values that do not reveal peer 110's secret-key 112, but yet verify the identity of peer 110, as will be discussed ahead.
The local authentication assumes that all peers in the P2P network 100 have a preconfigured public-key modulus. The public-key modulus, n, is a product of two prime numbers chosen by an administrator of the peer-to-peer network, where the two prime numbers are kept secret by the administrator and not revealed to the peers in the network. In one embodiment, all the public-key moduli in the peers have the same value n. For example, peer 110 can include a public-key modulus 111 n, and peer 160 can also include the same public-key modulus 161 n. When n is at least 1023 bits in length, there is no known method to compromise the cryptographic operations we discuss in this disclosure. In another embodiment, the public-key moduli in the peers have different values. For example, peer 110 can include a public-key modulus 111 that is different from the public-key modulus 161 in peer 160. As long as all public-key moduli are at least 1023 bits in length, there is no known method to compromise the cryptographic operations we discuss in this disclosure. It is not necessary for the public-key moduli to be kept secret; only the primes whose product produces the public-key moduli need be kept secret by the entity that creates the public-key moduli, for example the administrator of the peer-to-peer network. For the embodiment in which the public-key moduli in the peers have different values, it would be necessary for peers to inform each other of the public-key moduli they use. This can be done by the peers sending their public-key moduli directly to other peers, or can be done by attaching the public-key moduli they use to the identifier_name 113.
Each peer in the P2P network can create a secret key, s, 112 that is held in confidence by the peer. The secret key 112 can be used to generate the public-key pair 120 that is publicly shared with other peers in the P2P network. The public-key pair 120 can be used by other peers in the network to validate an identity of the peer corresponding to the public-key pair 120. For example, peer 110 can locally create the identifier name 113 (e.g. also called identifier_name) and locally create the secret key 112 (e.g. also called identifier_secret). From the secret key 112, peer 110 can locally generate a large public identifier 114, v, that is then cryptographically hashed to produce a small public-key 115 (e.g. also called identifier_public, hv). Peer 110 then combines the identifier_name 113, i, with the small public-key 115, hv, to produce the unique public-key pair <i,hv> 120. The public-key pair 120 is made public to peers such as peer 160 in the P2P network, which can then verify the identity of peer 110 in accordance with the methods herein set forth. In the current illustration, peer 160 authenticates peer 110 from the public-key pair 120 by requesting the peer 110 to produce a unique data set using the secret key 112, and comparing the unique data set to a second unique data set generated by peer 160 from a portion of the produced unique data set using the modulus operator, v 114. This allows peer 160 to verify the identifier name of the peer 110 without using an external authentication system.
Referring to FIG. 2, an exemplary method 200 for locally creating the public-key pair 120 is shown. In the current description, exemplary method 200 is implemented by peer 110, though it can be performed by any other peers creating an identifier name. It should also be noted that the exemplary method 200 can be performed each time peer 110 creates a new identifier name, such as an alias. This allows peer 110 to create multiple identifier names that can be authenticated for varying purposes. For example, peer 110 may have a first identifier name for a first peer group (e.g. friends), and a second identifier name for a second peer group (e.g. business). Upon completion of the exemplary method 200, each identifier name created by peer 110 can correspond to a unique public-key pair 120 that can be authenticated by another peer to validate an identity of the peer claiming the identifier name in the public-key pair. Briefly, FIG. 3 presents an exemplary depiction of the method steps of exemplary method 200.
At step 202, peer 110 chooses an arbitrary number that is called the identifier_secret 112 which is at least K bits in length. The identifier_secret is held in confidence by peer 110 and is not shared with any other peers; that is, it is secret and not revealed to other peers. The number of bits K equals the number of bits of n, wherein n is the public-key modulus 111 of the peer. For example, if the peers in the P2P network 100 use 1023 bits for n, then K is also 1023 bits in length.
At step 204, peer 110 applies a modulus operator to the identifier_secret 112 to produce identifier_public_large 114. The modulus operator is a mathematical operation defined as x² mod n, wherein x is the identifier_secret 112 and n is the public-key modulus 111. More specifically identifier_public_large=(identifier_secret)² mod n, where n corresponds to the value of the public-key modulus that is preconfigured in peer 110. Identifier_public_large 114 is a large public-key that can be used by other peers in the P2P network as a first level of authentication. It is large in the sense that identifier_public_large 114 requires a large number of bits to represent.
At step 206, peer 110 performs a cryptographic hash of the identifier_public_large 114 to produce identifier_public 115. The cryptographic hash is a hashing function that maps the larger value associated with identifier_public_large 114 to a smaller value associated with identifier_public 115. Examples of cryptographic hashes include the well-known hash algorithms HMAC and SHA-1. The smaller public-key 115 can be more efficiently communicated in the P2P network 100 than the larger public-key 114. The hashing function is a reproducible method of representing identifier_public_large 114 as identifier_public 115, which serves as a “fingerprint” of the larger identifier_public_large 114. In one embodiment, the hashing function can hash the 1023 bit identifier_public_large 114 (i.e. large public-key) to a 60 bit identifier_public 115 (i.e. small public-key), thereby reducing the amount of data required to represent the large public-key. Each consecutive group of 6 bits in the hash-value (i.e. identifier_public 115) can be represented by a single alphanumeric character (e.g. letter, character, digit) thus producing a small public-key (i.e. identifier_public 115) consisting of 10 characters (e.g. E678943T2U). In such regard, the identifier_public 115 can be more easily recognized than a 60 bit binary number.
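The fingerprinting step can be sketched as follows. The 64-symbol alphabet and the use of SHA-1 are assumptions for illustration; the disclosure only specifies a 60-bit hash rendered as ten characters of 6 bits each.

```python
import hashlib

# Assumed 64-symbol alphabet: one character per 6-bit group. The disclosure
# only says "letter, character, digit", so this particular table is illustrative.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def small_public_key(identifier_public_large: int) -> str:
    """Hash the large public-key down to 60 bits and encode the result as
    ten alphanumeric characters (6 bits per character)."""
    digest = hashlib.sha1(str(identifier_public_large).encode()).digest()
    # Keep the top 60 bits of the 160-bit SHA-1 digest.
    bits = int.from_bytes(digest, "big") >> (len(digest) * 8 - 60)
    chars = []
    for shift in range(54, -1, -6):          # ten 6-bit groups, high to low
        chars.append(ALPHABET[(bits >> shift) & 0x3F])
    return "".join(chars)
```

Because the function is deterministic, any peer can recompute the fingerprint from a presented identifier_public_large and compare it against the identifier_public in the public-key pair.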
At step 208, peer 110 reveals a public-key pair 120 <identifier_name, identifier_public> consisting of the chosen identifier_name 113 and the small public-key 115 (e.g. identifier_public). As an example, the identifier_name 113 can be a sequence of digits and letters (e.g. “Alice”) chosen by peer 110, and the identifier_public 115 can be the 10 character sequence (e.g. E678943T2U) produced from the hash. The public-key pair 120 can thus be represented by <Alice, E678943T2U> and shared amongst peers in the P2P network 100. For example, the public-key pair 120 can be presented by peer 110 when contacting peer 160 to allow peer 160 to validate the identity (e.g. identifier_name) of peer 110. In particular, peer 160 can use the public-key pair 120 as a first level of authenticating the identity of peer 110.
It should be noted that the exemplary method 200 can be practiced locally (e.g. on the device) at peer 110 without the use of an external authentication system (such as those using public-key infrastructure (PKI) techniques, where the keys to be communicated can be hundreds of bits in length). It should also be noted that the method steps 202 to 206 can be implemented locally by a processor (e.g. micro-controller, micro-processor, digital signal processor DSP, integrated circuit, etc. of peer 110) to generate identifier_name 113, identifier_public 115, and identifier_public_large 114. Peer 110 keeps the value of identifier_name 113, and the cryptographic values of identifier_secret 112, identifier_public_large 114, and identifier_public 115 local to the device, for example, by storing the values in a programmable memory.
Referring to FIG. 4, an exemplary flowchart 400 for authenticating a first peer in a peer-to-peer network by a second peer is shown. The flowchart 400 can be practiced with more or less than the number of steps shown, and is not limited to the order of the steps shown. To describe the flowchart 400, reference will be made to FIG. 1, although it is understood that the flowchart 400 can be implemented in any other manner using other suitable components. The exemplary flowchart 400 can start in a state wherein a first peer has created an identifier_name 113, identifier_secret 112, and a public-key pair 120 in accordance with the exemplary method 200 of FIG. 2.
At step 401, the first peer 110 associated with the public-key pair 120 contacts the second peer 160. The public-key pair <i, hv> 120 includes the identifier name 113, i, claimed by peer 110 and the corresponding small public-key 115, hv. Recall that the small public-key 115, hv, was generated by peer 110 from the secret key, s, 112 held in confidence by peer 110 in accordance with the method steps 200 of FIGS. 2 and 3. The identifier_name 113 can be a user name that uses a combination of characters, letters, and digits (e.g. “Alice”). For example, peer 110 can contact peer 160 by sending a message request to initiate a data sharing session which includes the public-key pair 120 (e.g. <Alice, E678943T2U>) which identifies the peer 110 as “Alice” with the corresponding smaller public identifier key 115 (e.g. identifier_public). Notably, peer 110 keeps the larger identifier_public_large 114 (e.g. 1023 bit value), which requires more bandwidth to communicate than the smaller identifier_public 115 (e.g. 60 bit value), and which is made available on request.
Upon parsing the identifier name 113 from the public-key pair 120, the peer 160 can proceed to perform a first level of authentication. In particular, peer 160 can validate that the smaller received public-key 115 (e.g. identifier_public) corresponds to the larger public-key 114 (e.g. identifier_public_large) held by peer 110. To do so, peer 160 at step 402 requests peer 110 to send the larger public-key 114 (e.g. identifier_public_large). Upon receiving identifier_public_large 114 from peer 110, peer 160 proceeds to compute the cryptographic hash of identifier_public_large 114 as shown in step 403. The hash can be a hashing function such as a hash map or hash table that is available to all peers in the P2P network 100.
At step 404, peer 160 can determine if Hash[identifier_public_large] equals identifier_public 115. In particular, peer 160 can locally determine if the hash-value (e.g. 60 bit number) produced from the hashing of identifier_public_large 114 equals the value of the identifier_public 115 (e.g. 60 bit number) received earlier in the public-key pair 120. If the hash-value produced from the hashing operation does not match the public-key value 115 (e.g. identifier_public), peer 160 determines that the public-key pair 120 <i, hv> does not correspond to the identifier_name, i, 113 claimed in the public-key pair <i, hv>. Accordingly, peer 160 invalidates the identity of peer 110 at step 416.
If the hash-value produced from the hashing operation does match the public-key value 115 (e.g. identifier_public), peer 160 proceeds to a second level of authentication. At step 405, peer 160 requests peer 110 to generate a random dataset. In response, at step 406, peer 110 generates k random numbers, r1, r2, . . . , ri for i=1 to k, where k is an integer that is preferably larger than 30. Peer 110 keeps the k random numbers secret. For each ri, peer 110 at step 407 generates the unique data set (ri)² mod n, where n is the public-key modulus 111. The unique data set contains k elements that are sent by peer 110 to peer 160.
Upon receiving the unique data set from peer 110, peer 160 selects a first subset and a second subset from the unique dataset. At step 408, peer 160 chooses a random subset of the received k values, and informs peer 110 at step 409 of the first subset and second subset selected. Each subset identifies the random values selected for that particular subset. For example, peer 160 can select the indexes (e.g. 1-5, 7, 13-16, and 25-30) of the unique dataset for identifying the first subset (e.g. corresponding to random numbers r1-5, r7, r13-16, and r25-30) and the indexes (e.g. 6, 8-12, and 17-24) for identifying the second subset (e.g. corresponding to random numbers r6, r8-12, and r17-24), and report the indexes to peer 110. In response to peer 110 receiving from peer 160 the indication of the first subset and the second subset, peer 110 can perform a first modulus operation on the random values of the first subset, and a second modulus operation on the random values of the second subset.
The first modulus operation includes the secret-key 112. In particular, peer 110 computes (identifier_secret*ri) mod (n) for each element (e.g. (ri)2) of the first subset and sends the computed values as a first reply at step 410. Notably, peer 110 uses the secret-key 112 (e.g. identifier_secret), which is held in confidence by peer 110, as a multiplier term to the random value to produce the values of the first reply.
The second modulus operation does not include the secret-key 112. In particular, peer 110 computes (ri) mod (n) for each element (e.g. (ri)2) of the second subset and sends the computed values as a second reply at step 411. Notably, peer 110 does not include a multiplication of the secret-key 112, or apply a squaring function, to each random value of the second subset.
At step 412, peer 160, responsive to receiving the first reply and the second reply, squares the first reply mod n to produce a first comparison set (e.g. ([s*r′i] mod n)2 mod n) and squares the second reply mod n to produce a second comparison set (e.g. (r″i mod n)2 mod n). At step 413, peer 160 checks that each element of the first comparison set is equal to the corresponding element received in the first subset of the unique dataset multiplied by the modulus operator, v, mod n (e.g. ([s*r′i] mod n)2 mod n==[(v)(ri)2] mod (n)). Notably, peer 160 incorporates the modulus operator, v, which was created using the secret-key 112 (see FIG. 3). That is, the secret-key 112 used by peer 110 to create the modulus operator 114 is constructively associated with the modulus operator 114, which peer 160 uses as a first step to validate that peer 110 holds the secret-key 112.
At step 414, peer 160 checks that each element of the second comparison set is equal to the corresponding element received in the second subset of the unique dataset (e.g. (r″i mod n)2 mod (n)==(ri)2 mod (n)). Notably, peer 160 does not incorporate a multiplicative term associated with the secret-key 112. Since the first subset and the second subset were selected randomly from the unique dataset, the second comparison set must also match the second subset of the unique dataset. The second subset thereby provides a second validation that peer 110 holds the secret-key 112.
If at step 415 the square of the first reply mod n is equal to [[(v)(ri)2] mod (n)] (i.e. step 413) and the square of the second reply mod n is equal to [(ri)2 mod (n)] (i.e. step 414), peer 160 validates the identifier_name, i, of peer 110 contained in the public-key pair <i, hv> as shown in step 417. If not, peer 160 invalidates the identifier_name of peer 110 at step 416. The flowchart 400 can be performed locally between peers for authenticating an identity of a peer. Upon a second peer authenticating a first peer, the peers can proceed to communicate.
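The two-level exchange of flowchart 400 can be simulated end-to-end. The following sketch uses a toy modulus and Python's non-cryptographic random module purely for illustration; a real deployment would use a ~1023-bit modulus and cryptographic randomness, as the patent describes:

```python
import random

def simulate_authentication(k: int = 32, seed: int = 1) -> bool:
    rng = random.Random(seed)
    # Public modulus n: product of two primes (toy-sized stand-in for ~1023 bits).
    p, q = 1000003, 1000033
    n = p * q
    s = rng.randrange(2, n)   # secret-key 112 (identifier_secret), kept by peer 110
    v = (s * s) % n           # large public-key 114 (identifier_public_large)

    # Steps 406-407: peer 110's secret random values and the unique data set.
    r = [rng.randrange(2, n) for _ in range(k)]
    unique = [(ri * ri) % n for ri in r]          # (ri)^2 mod n, sent to peer 160

    # Steps 408-409: peer 160 splits the k indexes into two random subsets.
    idx = list(range(k))
    rng.shuffle(idx)
    first, second = idx[: k // 2], idx[k // 2:]

    # Steps 410-411: peer 110's replies.
    reply1 = {i: (s * r[i]) % n for i in first}   # includes the secret-key
    reply2 = {i: r[i] % n for i in second}        # does not

    # Steps 412-414: peer 160 squares each reply and compares.
    ok1 = all((x * x) % n == (v * unique[i]) % n for i, x in reply1.items())
    ok2 = all((y * y) % n == unique[i] for i, y in reply2.items())
    return ok1 and ok2                            # step 415/417: validate identity

print(simulate_authentication())  # True
```

The first check passes because ([s*ri] mod n)^2 mod n equals s^2 * ri^2 mod n, which is exactly v * (ri)^2 mod n; the second passes because squaring ri mod n reproduces the element of the unique data set.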
Referring to FIG. 5, an exemplary method 500 that summarizes the steps of flowchart 400 for authenticating an identity of a peer in the P2P network 100 is presented. Method 500 can be practiced with more or fewer steps than shown, and is not limited to the order of the steps shown. Exemplary method 500 can begin in a state wherein a peer has created an identifier name.
At step 502, peer 110 creates the secret-key, s, 112 which is a number having a bit length K equal to the bit length of the public-key modulus n corresponding to peer 110. As an example, the secret-key can have length K = 1023 bits, the bit length of n. At step 504, peer 110 creates the public-key pair <i,hv> 120 that associates the identifier name, i, with a small public-key, hv. Peer 110 creates the small public-key 115 by performing a hash of a modulus operation of the secret-key 112, hv=hash [s2 mod n], where n is the public-key modulus common to peers in the P2P network 100, and s2 mod n is the large public-key 114. As previously noted, n is the product of two large primes and can be 1023 bits. At step 506, peer 110, responsive to a challenge by peer 160 to validate the identity of peer 110, chooses random numbers r1, r2, r3, . . . , rk and sends all (ri)2 mod n to peer 160. Peer 160 receives the random numbers and selects a first subset and second subset of the random numbers. Peer 160 then informs peer 110 of the random numbers selected in each subset, and at step 508, requests peer 110 to send ([s*ri] mod n) for the first subset, and (ri mod n) for the second subset. Peer 110 then responds with a first reply of (xi=[s*ri] mod n) and a second reply of (yi=ri mod n). At step 510, peer 160 verifies that the square of the first reply, (xi)2 mod n, is equal to [v*(ri)2] mod n in a first comparison (e.g. for xi=[s*ri] mod n, check (xi)2 mod n≡[v*(ri)2] mod n), and that the square of the second reply, (yi)2 mod n, is equal to (ri)2 mod n in a second comparison (e.g. for yi=ri mod n, check (yi)2 mod n≡(ri)2 mod n). If the results of the first comparison are equal and the results of the second comparison are equal, peer 160 validates the identifier_name 113 of peer 110, thereby validating that peer 110 holds the secret-key 112 and that peer 110 is the identity presented in the identifier_name 113.
Upon reviewing the aforementioned embodiments, it would be evident to an artisan with ordinary skill in the art that said embodiments can be modified, reduced, or enhanced without departing from the scope and spirit of the claims described below. There are numerous configurations for peer-to-peer authentication that can be applied to the present disclosure without departing from the scope of the claims defined below. For example, the hash can include any number of bits greater or less than 60 bits. The hash can also depend on the number of active peers in the P2P network. Moreover, the method 500 can omit the hashing operation and use the large public-key v=s2 mod n directly, producing the public-key pair <i, v> instead of <i, hv>. As another example, the methods of peer authentication discussed herein can be applied to signature verification. These are but a few examples of modifications that can be applied to the present disclosure without departing from the scope of the claims stated below. Accordingly, the reader is directed to the claims section for a fuller understanding of the breadth and scope of the present disclosure.
Those skilled in the art will recognize that the present invention has been described in terms of exemplary embodiments that can be based upon use of programmed processors to implement functions such as those described in method 200, flowchart 400, and method 500. However, the invention should not be so limited, since the present invention could be implemented using hardware component equivalents such as special purpose hardware and/or dedicated processors which are equivalents to the invention as described and claimed. Similarly, general purpose computers, microprocessor based computers, micro-controllers, optical computers, analog computers, dedicated processors and/or dedicated hard wired logic may be used to construct alternative equivalent embodiments of the present invention.
Those skilled in the art will appreciate that the program steps and associated data used to implement the embodiments described above can be implemented using any suitable electronic storage medium such as for example disc storage, Read Only Memory (ROM) devices, Random Access Memory (RAM) devices; optical storage elements, magnetic storage elements, magneto-optical storage elements, flash memory, core memory and/or other equivalent storage technologies without departing from the present invention. Such alternative storage devices should be considered equivalents.
The embodiments of the present invention, as described in embodiments herein, can be implemented using a programmed processor executing programming instructions that are broadly described above in flow chart form that can be stored on any suitable electronic storage medium (e.g., disc storage, optical storage, semiconductor storage, etc.) or transmitted over any suitable electronic communication medium. However, those skilled in the art will appreciate that the processes described above can be implemented in any number of variations and in many suitable programming languages without departing from the present invention.
While the invention has been described in conjunction with specific embodiments, it is evident that many alternatives, modifications, permutations and variations will become apparent to those of ordinary skill in the art in light of the foregoing description. Accordingly, it is intended that the present invention embrace all such alternatives, modifications, permutations and variations as fall within the scope of the appended claims.
Where applicable, the present embodiments of the invention can be realized in hardware, software or a combination of hardware and software. Any kind of computer system or other apparatus adapted for carrying out the methods described herein are suitable. A typical combination of hardware and software can be a mobile communications device with a computer program that, when being loaded and executed, can control the mobile communications device such that it carries out the methods described herein. Portions of the present method and system may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein and which when loaded in a computer system, is able to carry out these methods.
While the preferred embodiments of the invention have been illustrated and described, it will be clear that the embodiments of the invention are not so limited. Numerous modifications, changes, variations, substitutions and equivalents will occur to those skilled in the art without departing from the spirit and scope of the present embodiments of the invention as defined by the appended claims.
Open command (File menu)
Allows you to open a DPlot file (containing data, strings, and formatting information), or one or more files in any of the other formats described below (types B through K).
Shortcut: CTRL+O, or click the Open button on the toolbar.
A)DPlot file. Contains data for one or more curves, as well as the title, axis labels, legend, and other formatting information. You may also select this option to open a compressed DPlot file.
B)ASCII file, one or more data sets (X,Y curves) with arbitrarily spaced points. The first line should have the number of points (N). Subsequent lines should contain each of the N X,Y data pairs. Data values may be separated by one or more spaces or a comma.
Example:
5
1. 1.
2. 4.
3. 9.
4. 16.
5. 25.
You can include multiple curves by repeating the pattern: The last X,Y pair for curve 1 is followed by N for curve 2, etc.
Example:
5
1.00,1.00
2.00,4.00
3.00,9.00
4.00,16.0
5.00,25.0
5
1.00,1.00
2.00,8.00
3.00,27.0
4.00,64.0
5.00,125.
5
1.00,1.00
2.00,16.0
3.00,81.0
4.00,256.
5.00,625.
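To illustrate the type-B layout described above (a point count N followed by N X,Y pairs, repeated per curve), here is a sketch of a reader. This is illustrative code, not DPlot's own implementation:

```python
def parse_format_b(text: str):
    """Parse DPlot type-B ASCII: repeated blocks of N followed by N X,Y pairs.
    Values may be separated by one or more spaces or a comma, per the help text."""
    tokens = text.replace(",", " ").split()
    curves, pos = [], 0
    while pos < len(tokens):
        n = int(tokens[pos]); pos += 1
        pairs = [(float(tokens[pos + 2 * i]), float(tokens[pos + 2 * i + 1]))
                 for i in range(n)]
        pos += 2 * n
        curves.append(pairs)
    return curves

sample = """5
1.00,1.00
2.00,4.00
3.00,9.00
4.00,16.0
5.00,25.0
5
1.00,1.00
2.00,8.00
3.00,27.0
4.00,64.0
5.00,125.
"""
curves = parse_format_b(sample)
print(len(curves), curves[1][2])  # 2 curves; third point of curve 2 is (3.0, 27.0)
```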
C)Constant DX. The number of points (N) and spacing between data points (DX) should be on the same line, or DX should be on the line following N. DPlot will allow up to 3 header lines before the line containing N. If found, those lines will be used as the plot's title lines. The Y values may be on the same line separated by one or more spaces or a comma, or on separate lines. The first X value is assumed to be 0. (This can be amended by selecting 'Add a constant to X' from the 'Edit' menu.)
Example:
6
1.
0. 1. 4.
9. 16. 25.
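The constant-DX layout can likewise be sketched as a small reader (illustrative only; header-line and 'Add a constant to X' handling are omitted):

```python
def parse_constant_dx(text: str):
    """Parse DPlot type-C ASCII: N and DX (same line or next line), then N
    Y values; X starts at 0 and increments by DX."""
    tokens = text.replace(",", " ").split()
    n, dx = int(tokens[0]), float(tokens[1])
    ys = [float(t) for t in tokens[2 : 2 + n]]
    return [(i * dx, y) for i, y in enumerate(ys)]

points = parse_constant_dx("6\n1.\n0. 1. 4.\n9. 16. 25.\n")
print(points[3])  # (3.0, 9.0)
```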
D)Multiple column text files.
Multiple column text files (type D on the Select File Type dialog) may have columns delimited by commas (CSV files, for example), spaces, tabs, or semicolons. If the decimal symbol is set to a comma, columns should be separated by tabs or semicolons. Files may contain up to 30 header lines preceding the data columns (maximum number of header lines can be increased with the General command on the Options menu). DPlot looks for up to 20 successive lines that have the same number of data columns and the same data type in each column, and, if found, decides that this must be where the data starts. Data values may be separated by one or more spaces, a comma, or a tab. DPlot will read up to a maximum of 100 columns of data, restricted to 8192 characters per line.
Column Headings. If the data starts after line 1, DPlot attempts to get column headings from the previous line. Column headings are used as the legend for multiple curves, or the X and Y axis labels for a single curve. For this feature to work as expected with headings containing spaces, the columns must either be comma-separated, tab-separated, or delineated with double quotation marks. Otherwise DPlot will assume the labels are delineated with spaces.
Title Lines. Up to the first 3 lines in the file are used as title lines for the plot, unless these lines consist of numbers and/or spaces only.
Column Interpretation. If the file consists of a single column of data DPlot interprets the data as Y values, setting the corresponding X values to the index of the Y value in the file, starting from 0. By default, if the file consists of more than one column of data, DPlot uses the first column for the X array and subsequent columns as separate Y arrays. For multiple-column files, and/or for files consisting of alternating X,Y columns, you can change this default behavior by checking the Pick Columns to Plot box (see below) on the Open dialog box.
Example:
1.00 1.00 1.00 1.00
2.00 4.00 8.00 16.0
3.00 9.00 27.0 81.0
4.00 16.0 64.0 256.
5.00 25.0 125. 625.
Example (column headings):
This is the title line
X X^2 X^3 X^4
1 1 1 1
2 4 8 16
3 9 27 81
4 16 64 256
5 25 125 625
Example (column headings, comma-separated):
This is the title line
X,X^2,X^3,X^4
1,1,1,1
2,4,8,16
3,9,27,81
4,16,64,256
5,25,125,625
In addition to numbers DPlot will also accept columns consisting of dates, times, date-time pairs, currencies, and percentages. Date format is flexible but entries must be separated by a dash (-) or forward slash (/). If the month is specified as a number, then DPlot assumes the order is m/d/y (or d/m/y if Assume input dates are of the form d/m/y under the General command on the Options menu is checked) unless you use 4 digit years or if the assumed month entry is greater than 12. These date forms (for Jun 8, 2005) are acceptable:
6/8/05
06/08/2005
6-8-2005
2005/6/8
8-Jun-05
June-08-2005
but 'June 8, 2005' is not.
2-digit years less than 90 are interpreted as 21st century dates; 2-digit years greater than or equal to 90 are interpreted as 20th century dates. Of course, to avoid any ambiguity a 4-digit year is preferable.
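The numeric-date rules above (m/d/y order unless the d/m/y option is set or the month entry exceeds 12, 4-digit year first means y/m/d, and the 2-digit-year pivot at 90) can be sketched as follows. This is an illustrative parser covering only the purely numeric forms; month-name forms such as 8-Jun-05 are not handled here:

```python
def expand_two_digit_year(yy: int) -> int:
    """DPlot's rule: 2-digit years < 90 are 21st century; >= 90 are 20th."""
    return 2000 + yy if yy < 90 else 1900 + yy

def parse_numeric_date(text: str, assume_dmy: bool = False):
    """Sketch of the numeric-date rules for forms like 6/8/05 or 2005/6/8.
    Separators may be '-' or '/'."""
    parts = [int(p) for p in text.replace("-", "/").split("/")]
    if parts[0] > 31:                    # 4-digit year first: y/m/d order
        y, m, d = parts
    else:
        m, d, y = parts                  # default m/d/y order
        if assume_dmy or m > 12:         # d/m/y option, or month entry > 12
            m, d = d, m
        if y < 100:
            y = expand_two_digit_year(y)
    return y, m, d

print(parse_numeric_date("6/8/05"))    # (2005, 6, 8)
print(parse_numeric_date("25/12/99"))  # (1999, 12, 25)
```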
Time values should be in the form h:m:s AM/PM, using ':' as the separator. Leading zeroes are acceptable. If the AM or PM designation is omitted, DPlot assumes a 24 hour clock.
Example (time values):
This is the title line
"Time","Temperature","Units"
15:36:50,59.0,C
15:37:00,59.2,C
15:37:10,59.4,C
15:37:20,59.7,C
15:37:30,62.2,C
15:38:40,61.8,C
Date-time pairs should be in the form 'm/d/yy h:m:s AM/PM'.
Only dollar signs and British pound signs are currently accepted as currency symbols. The currency symbol should precede the value, as in '$56.23'. If monetary values include a comma for the thousands separator, the entry must be surrounded by double quotation marks. This is the same scheme used by Excel when saving CSV files.
Percentages should be followed by a percent sign (%). DPlot divides the number by 100 and uses Percent number formatting on the associated axis when appropriate.
Any other data type will be allowed but ignored; unrecognized entries will not cause an error, they simply will not be plotted. If present, blank entries are ignored. (For this feature to work as expected, columns must be tab-separated or comma-separated.) Entries must not contain commas or tabs unless they are surrounded by double quotation marks. (Microsoft Excel generally surrounds values containing commas with double quotation marks when saving to a CSV file.)
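The cell-interpretation rules for currencies, percentages, plain numbers, and ignored entries can be sketched as follows (illustrative code, not DPlot's own; date and time cells are omitted here):

```python
def parse_cell(cell: str):
    """Sketch of DPlot's cell interpretation: currency symbols ($ or £)
    stripped, quoted thousands-separated values accepted, percentages
    divided by 100, anything unrecognized ignored (returned as None)."""
    s = cell.strip().strip('"')
    if s and s[0] in "$£":
        return float(s[1:].replace(",", ""))   # e.g. "$1,234.50" -> 1234.5
    if s.endswith("%"):
        return float(s[:-1]) / 100.0           # e.g. "45%" -> 0.45
    try:
        return float(s)
    except ValueError:
        return None                            # ignored, not an error

print(parse_cell('"$1,234.50"'))  # 1234.5
print(parse_cell("45%"))          # 0.45
print(parse_cell("hello"))        # None
```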
Finally, the data rows may be preceded by an ID character string starting in column 1 that distinguishes this line as data. To make use of this feature check the “Data rows have ID string” box on the Open dialog, and enter the text (up to 8 characters) in the corresponding box. This string is preserved from one session to the next, and once you enter the appropriate string you will be able to drag-and-drop these files onto DPlot without the need to check this box each time.
Example (data row ID string):
This is the title line
X,X^2,X^3,X^4
# 1,1,1,1
# 2,4,8,16
# 3,9,27,81
# 4,16,64,256
# 5,25,125,625
6,7,8,9 This line will be ignored
As will all subsequent lines
If the title line(s) or column headings contain commas, the text to be used should start and end with a double quotation mark (").
Files saved by Microsoft Excel as "comma-separated values" can be read into DPlot using this option.
Campbell Scientific datalogger files
For the most part, Campbell Scientific datalogger files are handled identically to CSV and other multiple-column text files, with a few exceptions. DPlot determines that a multiple-column text file is a Campbell datalogger file if:
1) The first seven characters in the file are "TOA5", (including the quotes), and
2) The data is determined to start in line 5, and
3) Lines 2 through 4 contain two times as many quotation marks as there are data columns.
If all of the above tests are met, the file is considered a Campbell Scientific datalogger file. The only significant differences in file handling are:
1) The first title line is taken from the second quoted string in the first line (i.e. "TOA5" is ignored). The second title line is taken from the third quoted string in the first line.
2) Legend entries are taken from the second and third lines in the file. If the entries in the third line are not blank, a comma separator is added between the entry from the second line and the entry from the third line.
3) If the column heading for the second column in the second file line is "RECORD", this column is skipped. Normally the 0-based record number is superfluous. If you want the record number to be plotted, you should check the "Pick columns to plot" box.
Pick Columns to Plot
Allows you to specify which columns within a multiple-column file to plot. Up to 20 rows from the selected file are displayed in the read-only box on the left side of the dialog box. Each column is preceded by a heading with the column number as interpreted by DPlot. If the column numbers do not match up with what you expected, or if the displayed text does not start with the first data line, then DPlot had difficulty in determining either the number of columns or the start of the data, or both. For more information see the description of file type D in the Open Command Help topic.
By default, DPlot uses the first column of data in multi-column files for the X axis values. If there is only one column of data, DPlot starts X at 0 and increments by 1 for each additional row. Subsequent columns (or the first column in single-column files) are used for the Y values, each sharing the same X.
To change the column number used for the X axis values, enter a number in the Use column __ for X Axis box. This entry cannot be a text column. If this number is 0, then DPlot starts X at 0 and increments X by 1 for each additional row of data values.
Select columns to use for the Y values by checking the appropriate check boxes under Use these columns for Y. Note that you cannot select a column to serve as both the X Axis values and the Y values for a curve, nor can you select text columns (anything other than numbers, currencies, dates, times, or date-time pairs).
Alternatively, if the file contains alternating X,Y columns, as in:
X(1,1) Y(1,1) X(1,2) Y(1,2)
X(2,1) Y(2,1) X(2,2) Y(2,2)
X(3,1) Y(3,1) X(3,2) Y(3,2)
etc., then you should check the box labelled Alternating X,Y columns. In this case DPlot will disable the even-numbered (Y) columns. Checking an X column will automatically cause the corresponding Y column to be checked. Alternating X,Y columns cannot be selected if the total number of columns in the file is an odd number, or if the file contains any non-numeric columns.
If you are opening a file via a macro or programmatically (dplotlib.dll) using a FileOpen command and do not want the Specify Columns to Plot dialog to appear, instead using the default settings, use a ColumnsAre command. For example, if your file consists of two columns (or 4, 6, 8, etc.) for an XY Plot and you do not want the Specify Columns to Plot dialog to appear, use [ColumnsAre(1)] before the FileOpen command.
Labels
For X,Y data files containing 3 (and only 3) columns or for 3D data files containing 4 columns, you may specify that the last column contains point labels. For example this data:
0.0, 0.000000000,"X=$X, Y=$Y"
0.1, 0.309016994,
0.2, 0.587785252,
0.3, 0.809016994,
0.4, 0.951056516,
0.5, 1.000000000,"X=$X, Y=$Y"
0.6, 0.951056516,
0.7, 0.809016994,
0.8, 0.587785252,
0.9, 0.309016994,
1.0, 0.000000000,"X=$X, Y=$Y"
1.1,-0.309016994,
1.2,-0.587785252,
1.3,-0.809016994,
1.4,-0.951056516,
1.5,-1.000000000,"X=$X, Y=$Y"
1.6,-0.951056516,
1.7,-0.809016994,
1.8,-0.587785252,
1.9,-0.309016994,
2.0, 0.000000000,"X=$X, Y=$Y"
will produce a plot in which the labeled points display their coordinates ($X and $Y in each label are replaced by that point's X and Y values).
E)Unformatted binary (32-bit). Unformatted data files are generally much smaller than their equivalent ASCII files, and read/write operations are generally an order of magnitude faster. DPlot writes the file as a series of 1024-byte records, with the 1st record containing only the number of data points (4 byte integers) for each curve, and subsequent records consisting of 128 x,y pairs each (with possibly fewer than 128 pairs in the last record for each curve). The data for the second and subsequent curves (if present) must start on a 1024-byte boundary.
Note that this method will always produce a file size of at least 2048 bytes for a single curve (or 1024*(1+number of curves)), regardless of how small the original data set is.
X and Y values are saved as 4-byte (32-bit) floating point values. If precision is important (for example your data has hundreds of thousands of points with a relatively small increment between points), consider using type N. See below.
Files saved by DPlot using this format will always fill the first 1024-byte record with 0s following the number of points for each curve, which should assist programmers in quickly determining how many curves are present in the file.
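The record layout described above can be sketched in Python with the struct module. Little-endian byte order is an assumption here (the help text does not state endianness), and the writer is illustrative, not DPlot's own code:

```python
import struct
import io

def write_unformatted(stream, curves):
    """Sketch of DPlot's type-E layout: a 1024-byte header record holding one
    4-byte point count per curve (zero-padded), then each curve's x,y pairs as
    32-bit floats (128 pairs per 1024-byte record), zero-padded so the next
    curve starts on a 1024-byte boundary."""
    out = bytearray()
    header = b"".join(struct.pack("<i", len(c)) for c in curves)
    out += header.ljust(1024, b"\0")          # record 1: counts, zero-filled
    for curve in curves:
        data = b"".join(struct.pack("<ff", x, y) for x, y in curve)
        pad = (-len(data)) % 1024             # pad to the 1024-byte boundary
        out += data + b"\0" * pad
    stream.write(bytes(out))

buf = io.BytesIO()
write_unformatted(buf, [[(1.0, 1.0), (2.0, 4.0)]])
print(len(buf.getvalue()))  # 2048: the minimum file size for a single curve
```

Note that the 2048-byte result for a tiny single-curve file matches the minimum size stated above.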
If you use the Lahey FORTRAN compiler and you want to read an unformatted file produced by DPlot, you should compile your program with the /D option, which tells the program not to expect the F77L header. Likewise, if your F77L program produces an unformatted file that you want to read with DPlot, the program should be compiled with the /D option.
Other FORTRAN compilers may expect different file formats for unformatted data than that described above.
F)Binary file produced by Pacific Data Model 9820 recorders, DNA/Bendix format. Records produced in DNA field tests generally follow this format. See your instrumentation personnel. For more information see the Bendix Format Files Help topic.
G)Binary file produced by Pacific Data recorders, OLD format (pre-1993). For more information see the Pacific Data Recorder Old Format Help topic.
H)Binary file produced by Pacific Data recorders, NEW format (post-1993). For more information see the Pacific Data Recorder New Format Help topic.
I)Nicolet Time Domain files (binary) created with the Nicolet System 400 Digital Oscilloscope. For more information see the Nicolet Waveform File Specification Format Help topic.
J)Hardened Data Acquisition System (HDAS) files. The HDAS is a self-contained transducer/recording system developed by Dr. Ray Franco. For more information see the HDAS File Format Help topic.
K)Random 3D points. ASCII text file containing randomly-spaced 3D points, one X,Y,Z triplet per line. Values may be separated by commas, spaces, or tabs. This file type will result in a new plot window being opened (if the currently active window has a plot). For surface plots, a convex triangular mesh is generated, and each triangle in the mesh is considered planar when drawing contour levels. Areas outside the triangular mesh are not interpolated and not drawn. To delete extraneous triangles see the How To topic "How do I force DPlot to create a triangular mesh of my 3D points that is not convex?". You can generate a smoother plot after reading one of these files with the Generate Mesh command on the Options menu. Also for surface plots, points with identical X,Y coordinates are removed, preserving the point with the maximum Z value. For 3D scatter plots, all points are preserved. For 3D scatter plots, this file format can contain multiple data sets. Data sets are separated by a single blank line. 3D scatter plots saved by DPlot as CSV files will use this same format.
This file format may optionally contain 3 title lines and X, Y, and Z axis labels preceding the data. If included, these lines should start and end with double-quotation marks. DPlot reads labels in the order 1st title line, 2nd title line, 3rd title line, X axis label, Y axis label, Z axis label. So if you want, for example, to include all axis labels but have only one title line, the 2nd and 3rd lines in the file should be pairs of double-quotation marks. Alternatively, if the X, Y, and Z axis labels are on the same line (such that there are 3 pairs of double quotation marks on this line), DPlot will interpret this line correctly regardless of whether any title lines precede it.
For data files containing 4 (and only 4) columns, you may specify that the last column contains point labels by checking the Pick Columns to Plot box. These labels will only be drawn in 2D views, not 3D views. Labels should be delineated with "double quotation marks".
If you are opening a file via a macro or programmatically (dplotlib.dll) using a FileOpen command and do not want the Specify Columns to Plot dialog to appear (using the default settings instead), use a ColumnsAre command. For example, if your file consists of three columns of numbers with X, Y, and Z coming from columns 1, 2, and 3 respectively and you do not want the Specify Columns to Plot dialog to appear, use [ColumnsAre(3)] before the FileOpen command.
L)File import plugins. The associated listbox shows plugin modules that you elected to install when running the DPlot setup program. File import plugin modules distributed with DPlot are described in the File Import Plugin Modules topic. To develop your own plugin modules for DPlot, see the Plugins for File Import, Export and Data Manipulation topic.
M)1D Statistics. ASCII file containing one or more groups of amplitudes. These files produce a box-and-whisker plot or dot graph. The format for this file type is:
NumGroups
do i = 1,NumGroups
Label
NumPoints
do j = 1,NumPoints
Amplitude(j,i)
end do
end do
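The pseudocode loop above maps directly to a small parser. A sketch in Python (a hypothetical helper, not part of DPlot; it assumes one value per line, exactly as in the layout above):

```python
def parse_1d_stats(text):
    """Parse the 1D Statistics layout described above.

    Expects: NumGroups, then for each group a label line, a NumPoints
    line, and NumPoints amplitude lines. Returns a list of
    (label, [amplitudes]) tuples.
    """
    lines = iter(text.strip().splitlines())
    groups = []
    for _ in range(int(next(lines))):
        label = next(lines)
        n = int(next(lines))
        groups.append((label, [float(next(lines)) for _ in range(n)]))
    return groups
```

For example, a file with two groups ("Group A" with amplitudes 1.0 and 2.0, "Group B" with 3.5) parses into two (label, values) pairs.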
N)Unformatted binary (64-bit). Unformatted data files are generally much smaller than their equivalent ASCII files, and read/write operations are generally an order of magnitude faster. DPlot writes the file as a series of 1024-byte records, with the 1st record containing only the number of data points (4 byte integers) for each curve, and subsequent records consisting of 64 x,y pairs each (with possibly fewer than 64 pairs in the last record for each curve). The data for the second and subsequent curves (if present) must start on a 1024-byte boundary.
Note that this method will always produce a file size of at least 2048 bytes for a single curve (or 1024*(1+number of curves)), regardless of how small the original data set is.
X and Y values are saved as 8-byte (64-bit) floating point values. For a (usually) smaller file size but decreased precision, see type E above.
Files saved by DPlot using this format will always fill the first 1024-byte record with 0s following the number of points for each curve, which should assist programmers in quickly determining how many curves are present in the file.
If you use the Lahey FORTRAN compiler and you want to read an unformatted file produced by DPlot, you should compile your program with the /D option, which tells the program not to expect the F77L header. Likewise, if your F77L program produces an unformatted file that you want to read with DPlot, the program should be compiled with the /D option.
Other FORTRAN compilers may expect different file formats for unformatted data than that described above.
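As a concrete illustration of the record layout described above, here is a hypothetical Python reader for the 64-bit format (type N). It is a sketch, not an official DPlot tool: it assumes little-endian byte order, a first 1024-byte record holding one 4-byte point count per curve (zero-filled), data records of 64 (x, y) pairs of 8-byte doubles, and each curve starting on a 1024-byte boundary:

```python
import struct

def read_dplot_unformatted64(path):
    """Sketch of a reader for the 64-bit unformatted format described above."""
    curves = []
    with open(path, "rb") as f:
        # First 1024-byte record: 4-byte point counts, zero-filled.
        counts = [n for n in struct.unpack("<256i", f.read(1024)) if n > 0]
        for npts in counts:
            nrec = (npts + 63) // 64  # 64 pairs of doubles per 1024-byte record
            vals = struct.unpack(f"<{nrec * 128}d", f.read(nrec * 1024))
            curves.append([(vals[2 * i], vals[2 * i + 1]) for i in range(npts)])
    return curves
```

The zero-filled first record is what lets this reader recover the curve count without any extra bookkeeping, as the text above notes.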
Array sizes
If the current maximum number of points/curve is less than the number required to read the file, DPlot will attempt to re-allocate the X,Y arrays using the power of 2 greater than or equal to the number of points found in the file, and the number of curves specified in the file. If the re-allocation fails, DPlot presents an error message and/or reads only the allowable number of points.
If, after successfully reading a file, the allocated number of points/curve is greater than twice the number required, DPlot re-allocates the X,Y arrays using the power of 2 greater than or equal to the maximum number of points for all curves.
Multiple file selection
For other than DPlot files, you may read multiple files into the same window. To select multiple files, hold down the CONTROL key while selecting each of the files to read, or press the SHIFT key to select a range of files.
File termination
For each of the supported ASCII file formats, the last line in the file should include a carriage return/line feed sequence.
Time scale
Input time values for binary files (file types F, G, H, I, and J) are automatically shifted to milliseconds, rather than seconds.
Related macro commands
ColumnsAre
FileOpen
ForFilesIn
UseNameAsLegend
Page url: https://www.dplot.com/help/index.htm?helpid_open.htm
How to Clear Apt Cache in Ubuntu in 4 Easy Ways
Akshat
UX/UI Designer at - Adobe
Akshat is a software engineer, product designer and the co-founder of Scrutify. He's an experienced Linux professional and the senior editor of this blog. He...Read more
TL;DR
To clear Apt Cache in Ubuntu, try these four methods:
1. Use the sudo apt-get clean command to remove all downloaded package files from the Apt Cache.
2. Execute the sudo apt-get autoclean command in the Terminal window to remove unnecessary package files from your system.
3. Run the sudo apt-get autoremove command to remove old or unused packages from the Apt Cache location.
4. Manually clear Apt Cache by navigating to the cache directory using the cd command and remove all files using the sudo rm -rf * command.
Clearing Apt Cache helps free up disk space, improve system performance, prevent errors and failures, reduce security risks, and allow installation of new packages. To further prevent Apt Cache buildup, you should set up automatic cleaning using a cron job, use disk space analyzers, remove unnecessary packages, keep the system up to date, and increase disk space capacity.
To learn more about how to clear Apt Cache in Ubuntu, read the article below.
Regularly clearing Apt Cache in Ubuntu is crucial to maintain a well-performing system and avoid potential issues. Failing to do so can result in disk space problems that significantly slow down your system’s performance. Moreover, it can lead to failures in package installation and updates, leaving your system exposed to security threats.
In this article, I will provide you with a comprehensive guide on clearing Apt Cache in Ubuntu. This will let you free up valuable disk space and optimize your system’s efficiency. Additionally, you will gain insights into the importance of regularly clearing the Apt Cache and discover five practical tips for effectively managing it.
How to Clear Apt Cache in Ubuntu
To clear Apt Cache in Ubuntu, you can use commands such as apt-get clean, apt-get autoclean, and apt-get autoremove. Alternatively, you can manually clear the Apt Cache. Here are the detailed steps for each of these methods:
1. Use apt-get clean
The easiest way to clear Apt Cache in Ubuntu is by using the apt-get clean command. This command removes all the downloaded package files from the Apt Cache. Here’s how to do it:
1. Launch the Terminal on your system.
2. Type the following command and press Enter:
sudo apt-get clean
3. Enter your user password if prompted.
2. Use apt-get autoclean
Another way to clear Apt Cache in Ubuntu is by using the apt-get autoclean command. This command removes only the package files that are no longer needed. Here’s how to do it:
1. Navigate to the Terminal window via the Ubuntu App menu.
2. Type and execute the following command in the Terminal window:
sudo apt-get autoclean
3. This will remove all the package files that are no longer needed from the Apt Cache.
3. Use apt-get autoremove
If you have old or unused packages installed on your system, you can also use the apt-get autoremove command to remove them and their associated files from the system. Here’s how to do it:
1. Press the Ctrl + Alt + T keys to open the Terminal on your system.
2. Execute the following command:
sudo apt-get autoremove
3. This will remove all the old or unused packages and their associated files from the system.
4. Manually Clear the Apt Cache
If you want to clear the Apt Cache manually, follow these simple steps to ensure the proper removal of unnecessary package files from your system:
1. Head to the Terminal app on your Linux system and run the command below to go to the Apt Cache directory:
cd /var/cache/apt/archives
2. Next, execute this command to list all files in the Apt Cache directory:
ls
3. After that, type the following command to remove all the files in the Apt Cache directory:
sudo rm -rf *
Top 5 Reasons to Clear Apt Cache
After learning how to clear the Apt Cache in Ubuntu using Linux tools, it’s important to understand why it’s essential to keep it clear. So, here are five important reasons to do so:
• 🗑️ Free up disk space: One of the main reasons to keep Apt Cache clear is to free up disk space on your system. Over time, the cache can accumulate a large number of unnecessary files, taking up valuable storage space. Clearing the cache regularly can help you free up disk space and ensure that your system runs smoothly.
• 💨 Improve system performance: Clearing Apt Cache can also help improve system performance. If the cache becomes too large, it can slow down package installations and updates, causing your system to become sluggish. Clearing the cache can speed up the process and make your system run more efficiently.
• 🚫 Prevent errors and failures: If the Apt Cache becomes corrupted or contains outdated files, it can cause errors and failures during package installations and updates. By keeping the cache clear, you can prevent these issues and ensure that your system remains stable and reliable.
• 🔒 Reduce security risks: An old or corrupted Apt Cache can also pose security risks to your system. If the cache contains outdated or vulnerable packages, it can leave your system vulnerable to security threats. Clearing the cache regularly can help reduce these risks and keep your system secure.
• Install new packages: If your Apt Cache is full, you may not be able to install new packages or updates. Clearing the cache can free up space and allow you to install new packages without any issues.
5 Tips to Keep Apt Cache Clear
To keep Apt Cache clear in Ubuntu, implement proactive measures to maintain efficient system performance. Here are five valuable tips to help you prevent Apt Cache buildup and ensure optimal performance of your Ubuntu system:
1. 🧹 Set up automatic Apt Cache cleaning: Schedule a cron job to auto-clean the Apt Cache regularly, reducing manual effort and preventing buildup. You can use the command sudo crontab -e to edit the crontab file and add a line like 0 0 * * 0 sudo apt-get autoclean. This will run the sudo apt-get autoclean command automatically at midnight every Sunday without any manual intervention.
2. 💾 Use disk space analyzers: Tools like Baobab or Disk Usage Analyzer let you identify large files or directories occupying significant space. This also enables you to detect and address Apt Cache buildup. To install Baobab, run sudo apt-get install baobab in the Terminal. Then, launch this app by entering baobab in the Terminal or via the app menu. Now, scan your system’s storage to pinpoint areas of concern, take appropriate actions to free up space, and prevent Apt Cache problems.
3. 🗑️ Remove unnecessary packages: Removing packages that you no longer need can help reduce the size of your Apt Cache. Use the apt-get autoremove command to remove packages that were installed as dependencies but are no longer needed.
4. 🚀 Keep your system up to date: Keeping your system up to date ensures that you have the latest security patches and bug fixes. This can help prevent security vulnerabilities and reduce the risk of issues caused by an uncleared Apt Cache. To do so, execute sudo apt-get update && sudo apt-get upgrade to get the latest updates installed on your Linux system.
5. 💿 Increase your disk space: If you frequently encounter disk space issues, consider upgrading your storage capacity. This can provide you with more space to store packages and reduce the risk of issues caused by an uncleared Apt Cache.
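For tip 1, the crontab entry itself is a one-line config fragment. Note that entries added via sudo crontab -e already run as root, so a sudo prefix inside the entry is unnecessary:

```
# Added via: sudo crontab -e
# Runs autoclean every Sunday at midnight (minute hour day month weekday)
0 0 * * 0 apt-get autoclean
```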
Wrapping Up
In conclusion, regularly clearing the Apt Cache in Ubuntu can help maintain a clean and efficient system. By using any of the four methods described in this article – sudo apt-get clean, sudo apt-get autoclean, sudo apt-get autoremove, or manual clearing – you can free up disk space, improve performance, and reduce security risks. Additionally, setting up automatic cleaning, using disk space analyzers, removing unnecessary packages, keeping the system up to date, and increasing disk space capacity can help prevent Apt Cache buildup and maintain system health.
For more insights, consider reading my articles on optimizing Linux memory performance, managing apt packages in Ubuntu, and clearing Bash history to maintain a secure and stable Linux environment. These resources will further enhance your understanding and ability to manage and optimize your Linux system.
Frequently Asked Questions
What is Apt Cache?
Apt Cache is a storage area used by the Advanced Packaging Tool (APT) for managing software on Debian-based Linux systems like Ubuntu. It stores package files and metadata that APT retrieves during software installation, update, or upgrade processes. Typically located in the /var/cache/apt/archives directory, the cache can be helpful for reinstalling or downgrading packages without downloading them again. However, over time, it can accumulate and consume a significant amount of disk space. Regularly clearing the Apt Cache is essential for freeing up disk space, enhancing system performance, and reducing potential security risks.
How frequently should I clear Apt Cache in Ubuntu?
The frequency of clearing Apt Cache in Ubuntu depends on your system usage and available disk space. For users who frequently install, update, or remove packages, I recommend clearing the cache more often to prevent disk space issues and maintain system efficiency. On the other hand, users with a stable system and infrequent package installations might not need to clear the cache as often. As a general guideline, it’s a good idea to monitor your disk space usage and clear the cache when you notice significant accumulation or when disk space becomes limited. Setting up a periodic automatic cache cleaning, such as a weekly or monthly cron job, can also be a practical approach to keep the Apt Cache under control.
Is it okay to clear Apt Cache while I am installing or updating packages?
The Apt Cache stores package files temporarily during installations or updates. So, clearing the cache at that time might lead to incomplete or corrupt installations. It is best to wait until the installation or update process has been completed successfully before attempting to clear the Apt Cache. Always ensure that no package-related tasks are in progress when you decide to clear the cache to avoid complications and maintain system stability.
What are some other ways to clear Apt Cache in Ubuntu?
There are alternative methods to clear Apt Cache in Ubuntu: a graphical tool such as the Synaptic Package Manager, the autoremove option of apt-get, or the aptitude clean command. The most common and efficient method, however, is still the command line, clearing the Apt Cache with the apt-get clean command.
Why is clearing Apt Cache necessary?
Clearing the Apt Cache is necessary to maintain a clean and efficient system, as it helps free up valuable disk space and improve overall performance. The Apt Cache stores package files temporarily for software installations and updates on your system. Over time, this cache can grow significantly, taking up substantial storage space and potentially causing performance issues. Additionally, outdated packages in the cache may pose security risks due to known vulnerabilities. Regularly clearing the Apt Cache ensures that your system remains free of unneeded package files, enhancing stability, security, and available storage for more critical data and applications.
How To Write a Telegram circa 1928
At one time, telegrams were the primary means of high-speed, long-distance communications. They were the email of their day and, like any widely used service, customs and rules grew up around them. A 1928 booklet by Nelson E. Ross titled How To Write Telegrams Properly details some of those rules and conventions. Not solely about writing telegrams, it also serves as a historical snapshot of telegraph services in 1928. The many topics covered include:
• How to Save Words
• Tolls – How Computed
• Extra Words and Their Avoidance
• Collect Cards and Their Uses
• Messages for Persons on Trains
• How to Send Money by Telegraph
• “Telegraphic Shopping” Service
The range of telegraph services available at the time was quite impressive. I knew it was possible to send money by telegraph (think Western Union), but I never knew that telegram delivery was possible to a person on a train. Similarly, I had never heard of “Telegraphic Shopping,” seemingly a 1920′s e-commerce prototype providing a way to buy any “standardized article from a locomotive to a paper of pins.” I wonder how far that idea developed.
Another part of the booklet addresses punctuation in telegrams, specifically the use of the word “stop” as a substitute for a period. Telegrams depicted in movies always use “stop” heavily, yet each use would have meant an extra word charge. In real life (from what I understand), most people structured their telegrams so that the meaning was clear without any punctuation. But sometimes an extra word was necessary:
If it seems impossible to convey your meaning clearly without the use of punctuation, use may be made of the celebrated word “stop,” which is known the world over as the official telegraphic or cable word for “period.” This word “stop” may have perplexed you the first time you encountered it in a message. Use of this word in telegraphic communications was greatly increased during the World War, when the Government employed it widely as a precaution against having messages garbled or misunderstood, as a result of the misplacement or omission of the tiny dot or period.
This booklet is one of the many resources on The Telegraph Office, a site devoted to telegraphy, or as they put it “A Tribute to Morse Telegraphy and Resource for Wire and Wireless Telegraph Key Collectors and Historians.”
2 comments on “How To Write a Telegram circa 1928”
• Evan Parry wrote:
I always cringe when I hear “stop” read out in telegrams on TV or in movies. This is a ridiculous idea as pointed out by TV Tropes. It’s as ridiculous as actors dropping their script pages during the recording of a radio show, like in the movie “Annie”.
Skip to content
Aurelia
Yii Framework Froala WYSIWYG Editor
Installation
The preferred way to install this extension is through composer.
Either run
php composer.phar require --prefer-dist froala/yii2-froala-editor
or add
"froala/yii2-froala-editor": "^2.6.0"
to the require section of your composer.json file.
Usage
Once the extension is installed, simply use it in your code by :
<?php echo froala\froalaeditor\FroalaEditorWidget::widget([
'name' => 'content',
'options' => [
// html attributes
'id'=>'content'
],
'clientOptions' => [
'toolbarInline'=> false,
'theme' =>'royal', //optional: dark, red, gray, royal
'language'=>'en_gb' // optional: ar, bs, cs, da, de, en_ca, en_gb, en_us ...
]
]); ?>
or use with a model:
<?php echo froala\froalaeditor\FroalaEditorWidget::widget([
'model' => $model,
'attribute' => 'content',
'options' => [
// html attributes
'id'=>'content'
],
'clientOptions' => [
'toolbarInline' => false,
'theme' => 'royal', //optional: dark, red, gray, royal
'language' => 'en_gb' // optional: ar, bs, cs, da, de, en_ca, en_gb, en_us ...
]
]); ?>
Add the Font Awesome CDN for the font-awesome plugin:

<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css">
Upload example
Using the basic Yii template make a new folder under /web/ called uploads.
For the controller (this assumes use Yii;, use yii\web\UploadedFile; and use yii\web\Response; at the top of the controller class):
public function actionUpload() {
$base_path = Yii::getAlias('@app');
$web_path = Yii::getAlias('@web');
$model = new UploadForm();
if (Yii::$app->request->isPost) {
$model->file = UploadedFile::getInstanceByName('file');
if ($model->validate()) {
$model->file->saveAs($base_path . '/web/uploads/' . $model->file->baseName . '.' . $model->file->extension);
}
}
// Get file link
$res = [
'link' => $web_path . '/uploads/' . $model->file->baseName . '.' . $model->file->extension,
];
// Response data
Yii::$app->response->format = Response::FORMAT_JSON;
return $res;
}
For model:
namespace app\models;
use yii\base\Model;
use yii\web\UploadedFile;
/**
* UploadForm is the model behind the upload form.
*/
class UploadForm extends Model
{
/**
* @var UploadedFile|Null file attribute
*/
public $file;
/**
* @return array the validation rules.
*/
public function rules()
{
return [
[['file'], 'file']
];
}
}
For the view:
<?= \froala\froalaeditor\FroalaEditorWidget::widget([
'name' => 'body',
'clientOptions' => [
'toolbarInline'=> false,
'height' => 200,
'theme' => 'royal',//optional: dark, red, gray, royal
'language' => 'en_gb' ,
'toolbarButtons' => ['fullscreen', 'bold', 'italic', 'underline', '|', 'paragraphFormat', 'insertImage'],
'imageUploadParam' => 'file',
'imageUploadURL' => \yii\helpers\Url::to(['site/upload/'])
],
'clientPlugins'=> ['fullscreen', 'paragraph_format', 'image']
]); ?>
For full details on usage, see the documentation.
Custom Buttons Example
The custom Buttons can be defined in JS files anywhere you want, in this example in /basic/assets/ folder.
In the view:
<?php $this->registerJsFile('/basic/assets/alert.js', ['depends' => '\froala\froalaeditor\FroalaEditorAsset']);?>
<?= \froala\froalaeditor\FroalaEditorWidget::widget([
'name' => 'body',
'clientOptions' => [
'toolbarInline' => false,
'height' => 200,
'theme' => 'royal',//optional: dark, red, gray, royal
'language' => 'en_gb',
'toolbarButtons' => ['fullscreen', 'bold', 'italic', 'underline', '|', 'paragraphFormat', 'insertImage', 'alert']
],
'clientPlugins' => ['fullscreen', 'paragraph_format', 'image']
]); ?>
In /basic/assets/alert.js:
FroalaEditor.DefineIcon('alert', {NAME: 'info'});
FroalaEditor.RegisterCommand('alert', {
title: 'Hello',
focus: false,
undo: false,
refreshAfterCallback: false,
callback: function () {
alert('Hello!');
}
}
);
For more details you can go to Custom Buttons
What Is X
Do you ever wonder about the mysterious ‘X’ that seems to hold the key to unlocking the secrets of mathematical equations?
It’s not just a placeholder – it’s a powerful tool that can transform the way you approach problem-solving.
From algebraic expressions to real-world applications, understanding the role of ‘X’ can lead you down a path of mathematical mastery that will reshape your perspective on numbers and their relationships.
Get ready to unravel the enigma of ‘X’ and discover the endless possibilities it holds in the realm of mathematics.
The Basics of Variables
Understanding mathematical variables begins by recognizing that they represent unknown values in equations, playing a crucial role in solving mathematical problems. When you encounter variables like ‘x’ or ‘y’ in math, think of them as placeholders for numbers that we need to find. These letters can stand for any value, allowing flexibility in problem-solving. By assigning different values to these variables, you can manipulate equations to determine the unknown quantities. Variables are the building blocks of algebra, enabling you to express relationships between quantities and solve complex problems efficiently.
As you delve deeper into mathematics, you’ll encounter various types of variables, such as independent variables that you can manipulate and dependent variables that change based on the independent ones. Understanding how these variables interact is key to grasping advanced mathematical concepts. Variables aren’t just letters on a page; they’re tools that mathematicians use to unlock the secrets hidden within equations. Embrace variables as your allies in the journey through mathematical landscapes.
Role of ‘X’ in Equations
To understand the role of ‘X’ in equations, consider how this variable serves as a key element in determining unknown values and relationships within mathematical expressions. In algebraic equations, ‘X’ represents an unknown quantity that we aim to find. It acts as a placeholder for a value that needs to be solved for to make the equation true.
By manipulating equations containing ‘X’, you can uncover the value of ‘X’ and thus solve for the unknown. This process is fundamental in mathematics as it enables us to find solutions to real-world problems, model relationships between variables, and make predictions based on given information.
‘X’ allows us to express complex relationships in a concise and structured manner, making it easier to work with mathematical problems efficiently. Understanding the role of ‘X’ in equations is crucial for mastering algebra and higher-level mathematics, as it forms the basis for solving a wide range of mathematical problems.
Solving Equations With ‘X’
When solving equations with ‘X’, you must isolate the variable to find its value. This process involves performing operations on both sides of the equation to simplify it until ‘X’ is alone on one side. For example, in the equation 2X + 5 = 11, you’d first subtract 5 from both sides to get 2X = 6. Then, by dividing by 2, you find that X equals 3. Remember to perform the same operation on both sides of the equation to maintain equality.
It’s essential to follow the order of operations when solving equations with ‘X’. Start by simplifying within parentheses, then solve exponents, followed by multiplication and division from left to right, and finally addition and subtraction from left to right. Keeping track of each step ensures accuracy in finding the value of ‘X’.
Practice solving various equations to become comfortable with isolating ‘X’. This skill is fundamental in algebra and lays the foundation for more complex problem-solving in mathematics.
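The 2X + 5 = 11 example above can be mirrored in a few lines of code. A hypothetical helper that "undoes" the operations in the order described, for any equation of the form aX + b = c:

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x: subtract b from both sides, then divide by a."""
    return (c - b) / a

x = solve_linear(2, 5, 11)  # the example above: 2X + 5 = 11
print(x)  # → 3.0
```

Performing the same operation on both sides, as the text stresses, is exactly what the two steps in the function do.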
Graphing ‘X’ in Functions
When graphing ‘X’ in functions, remember to plot points on a coordinate plane to visualize the relationship between the variable and the function. By assigning different values to ‘X’ and calculating the corresponding output of the function, you can create a series of points that, when plotted, form a graphical representation of the function.
Start by selecting a range of values for ‘X’ that you want to examine. Substituting these values into the function will give you the corresponding ‘Y’ values. Plot each pair of ‘X’ and ‘Y’ values on the coordinate plane. Once you have plotted several points, you can connect them to reveal the shape of the function.
Graphing ‘X’ in functions allows you to see patterns, identify key points such as intercepts and extrema, and understand how the function behaves across different inputs. This visual representation can provide insights into the behavior of the function that may not be immediately apparent from the algebraic expression alone.
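Graphing boils down to tabulating (X, Y) pairs, as described above. A small sketch using a hypothetical function y = 2x + 1:

```python
def f(x):
    return 2 * x + 1  # a hypothetical function to graph

# Choose a range of X values and compute the matching Y values,
# then plot each pair on the coordinate plane and connect them.
points = [(x, f(x)) for x in range(-2, 3)]
print(points)  # → [(-2, -3), (-1, -1), (0, 1), (1, 3), (2, 5)]
```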
Applications of ‘X’ in Real Life
Exploring real-life scenarios often involves applying various values to ‘X’ to analyze how functions interact with practical situations.
In everyday life, ‘X’ serves as a placeholder for unknown quantities that can be solved using mathematical equations. For instance, when planning a budget, you might use ‘X’ to represent the amount of money you can spend on groceries each month after paying bills. By setting up an equation where total income minus expenses equals the grocery budget (‘X’), you can find the optimal amount to allocate.
Similarly, in fields like engineering, ‘X’ can symbolize variables such as distance, time, or velocity in complex formulas to design structures or solve mechanical problems. Understanding the role of ‘X’ in real-life applications enables you to make informed decisions based on mathematical models and data analysis.
Whether managing finances, predicting trends, or optimizing processes, the versatility of ‘X’ empowers you to tackle diverse challenges with precision and logic.
Conclusion
So, now you understand the importance of the variable ‘x’ in mathematical expressions.
From its basic role in equations to its application in real life scenarios, ‘x’ is a fundamental component in the world of mathematics.
Keep practicing solving equations and graphing functions with ‘x’ to deepen your understanding and improve your mathematical skills.
Remember, ‘x’ isn’t just a letter, but a powerful tool in solving and understanding mathematical problems.
By ashdev
Burcu Dogan wrote some example code showing how to sync a local preferences file to the user's Google Drive appfolder, found here: https://github.com/googledrive/appdatapreferences-android
I've converted this example to use the current Drive SDK, now shipping with Google Play Services.
If I update the cloud Drive file with device 1, and then run the following code on device 2, I'm getting a stale "modified" timestamp from the metadata. I'm assuming this is because the results are from a local cache of the Drive file:
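If the staleness really is the local cache (an assumption on my part), the Play Services Drive API exposes a requestSync() call that asks the client to pull fresh metadata from the server before querying. A sketch (await() blocks, so this must run off the UI thread):

```java
// Sketch: force the Drive client to sync its local cache with the
// server before issuing the query below.
Status syncStatus = Drive.DriveApi.requestSync(getGoogleApiClient()).await();
if (!syncStatus.isSuccess()) {
    LOGW(TAG, "requestSync failed: " + syncStatus.getStatusMessage());
}
```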
Step 1. Look up the preferences file by name, with a query:
/**
* Retrieves the preferences file from the appdata folder.
* @return Retrieved preferences file or {@code null}.
* @throws IOException
*/
public DriveFile getPreferencesFile() throws IOException
{
if (mDriveFile != null)
return mDriveFile;
GoogleApiClient googleApiClient = getGoogleApiClient();
if (!googleApiClient.isConnected())
LOGW(TAG, "getPreferencesFile -- Google API not connected");
else
LOGD(TAG, "getPreferencesFile -- Google API CONNECTED");
Query query = new Query.Builder()
.addFilter(Filters.contains(SearchableField.TITLE, FILE_NAME))
.build();
DriveApi.MetadataBufferResult metadataBufferResult =
Drive.DriveApi.query(getGoogleApiClient(), query).await();
if (!metadataBufferResult.getStatus().isSuccess()) {
LOGE(TAG, "Problem while retrieving files");
return null;
}
MetadataBuffer buffer = metadataBufferResult.getMetadataBuffer();
LOGD(TAG, "Preference files found on Drive: " +
buffer.getCount());
if (buffer.getCount() == 0)
{
// return null to indicate the preference file doesn't exist
mDriveFile = null;
// create a new preferences file
// mDriveFile = insertPreferencesFile("{}");
}
else
mDriveFile = Drive.DriveApi.getFile(
getGoogleApiClient(),
buffer.get(0).getDriveId());
// Release the metadata buffer
buffer.release();
return mDriveFile;
}
Step 2. Get the metadata for the file:
// Get the metadata
DriveFile file;
DriveResource.MetadataResult result = file.getMetadata(getGoogleApiClient()).await();
Metadata metadata = result.getMetadata();
// Get the modified dates
metadata.getModifiedDate();
More curiously, after running the code below (which just lists the appdatafolder files and their content) the metadata modified date, fetched above, becomes correct!! Why???
/**
*
* Simple debug activity that lists all files currently in Drive AppFolder and their contents
*
*/
public class ActivityViewFilesInAppFolder extends BaseActivity {
private static final String TAG = "ActivityViewFilesInAppFolder";
private TextView mLogArea;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
// Add a text view to the window
ScrollView layout = new ScrollView(this);
setContentView(layout);
mLogArea = new TextView(this);
layout.addView(mLogArea);
ApiClientAsyncTask<Void, Void, String> task = new ApiClientAsyncTask<Void, Void, String>(this) {
@Override
protected String doInBackgroundConnected(Void[] params) {
StringBuffer result = new StringBuffer();
MetadataBuffer buffer = Drive.DriveApi.getAppFolder(getGoogleApiClient())
.listChildren(getGoogleApiClient()).await().getMetadataBuffer();
result.append("found " + buffer.getCount() + " files:\n");
for (Metadata m: buffer) {
DriveId id = m.getDriveId();
DriveFile file = Drive.DriveApi.getFile(getGoogleApiClient(), id);
DriveContents contents = file.open( getGoogleApiClient(),
DriveFile.MODE_READ_ONLY, null).await().getDriveContents();
FileInputStream is = new FileInputStream(contents.getParcelFileDescriptor()
.getFileDescriptor());
try {
BufferedReader bf = new BufferedReader(new InputStreamReader(is, Charsets.UTF_8));
String line=null; StringBuffer sb=new StringBuffer();
while ((line=bf.readLine()) != null ) {
sb.append(line);
}
contents.discard(getGoogleApiClient());
result.append("*** " + m.getTitle() + "/" + id + "/"
+ m.getFileSize() + "B:\n [" + sb.toString() + "]\n\n");
} catch (IOException e) {
throw new RuntimeException(e);
}
}
buffer.release();
return result.toString();
}
@Override
protected void onPostExecute(String s) {
if (mLogArea != null) {
mLogArea.append(s);
Map<String, ?> values = PreferenceManager
.getDefaultSharedPreferences(ActivityViewFilesInAppFolder.this).getAll();
String localJson = new GsonBuilder().create().toJson(values);
LOGD(TAG, "Local: " + localJson);
LOGD(TAG, "File: " + s);
}
}
};
task.execute();
}
}
Is the metadata reading from a cached local copy, unless something kicks it?
Does anyone know how to force these APIs to always pull the results from the remote Drive file?
I have an answer to your question. Well 'kind of answer', and I'm sure you will not be happy with it.
I had used the RESTful API in my app before switching to GDAA. And after I did, I realized that GDAA, another layer with timing delays I have no control over, is causing issues in an app that attempts to keep multiple Android devices synchronized. See SO 22980497, 22382099, 22515028 and 23073474, and just grep for 'requestSync'.
I was hoping that GDAA implemented some kind of GCM logic to synchronize 'behind-the-scenes'. Especially when there is the 'addChangeListener()' method that seems to be designed for that. It does not look to be the case (at least not around Sept 2014). So, I backed off to a true-and-tested scheme of using RESTful API to talk to Google Drive with DataProvider and SyncAdapter logic behind it (much like shown in the UDACITY Class here).
What I'm not happy about, is somewhat ambiguous documentation of GDAA using terms like 'synchronize' not telling us if it is 'local' or 'network' synchronization. And not answering questions like the SO 23073474 mentioned above.
It appears (and I am not a Google insider) that GDAA has been designed for apps that do not immediately synchronize between devices. Unfortunately this has not been mentioned here or here - see 1:59, costing me a lot of time and frustration.
Now the question is: Should I (we) wait until we get 'real time' synchronization from GDAA, or should we go ahead and work on home-grown GCM based sync on top of RESTful-DataProvider-SyncAdapter?
Well, I personally will start working on GCM sync and will maintain an easy-to-use miniapp that will test GDAA behavior as new versions of Google Play Services come out. I will update this answer as soon as I have the 'test miniapp' ready and up in GitHub. Sorry I did not help much with the problem itself.
• Hopefully someone from the Drive team will respond to these inquiries. They've been fairly quiet as of late. And, thank you for the lengthy response Sean. Your previous questions on the topic were some of the ones I had come across. It's a shame that the new Drive API is still lacking in features, usability, and documentation. I've personally spent a week just trying to sync a preferences file to the user's appfolder, even with the example from @Burcu Dogan. And, it's completely unclear what functionality we're expected to see from addChangeListener() and requestSync()... – Alchete Jan 10 '15 at 16:00
• Watch out, though, I've posted a few questions regarding these issues only to be quickly down-voted. I gave up and deleted them. – seanpj Jan 10 '15 at 16:35
Well, I just found the secret sauce to trick the new Drive API into reading metadata from the remote file instead of the local cache.
Even though reading metadata doesn't require the file to be opened, it turns out that the file needs to be opened!
So the working code to read the latest metadata from the cloud is as follows:
DriveFile file;
// Trick Google Drive into fetching the remote file
// which has the latest metadata
file.open( getGoogleApiClient(), DriveFile.MODE_READ_ONLY, null).await();
DriveResource.MetadataResult result = file.getMetadata(getGoogleApiClient()).await();
Metadata metadata = result.getMetadata();
// Get the modified date
metadata.getModifiedDate();
Question for Google -- is this working as intended? The metadata is cached unless you first open the file read_only?
• Try this fun test. 1/ list files 2/ go to the Drive in you browser and 'remove' some 3/ list again checking 'trashed' flag. 4/ in your web browser, empty trash 5/ list again. I succeeded to list the trashed/deleted files in the Android app days later. Another fun test. There is no 'delete' in GDAA. 1/ Delete a folder using RESTful API; wait a few hours 2/use GDAA to create a file in the folder you deleted (referring to the file using title, DriveId or Resource Id). I managed to create files (hours later) in a folder that doesn't exist. That was my fun with GDAA for most of last year. – seanpj Jan 11 '15 at 0:41
• BTW, thanks for your 'open' discovery, I'll play with it. Did not know 'open' existed in GDAA. – seanpj Jan 11 '15 at 0:44
• Excellent. Let me know if it works for you also. – Alchete Jan 11 '15 at 7:14
• Any idea how to force a refresh/resync when listing files / querying for files? – m02ph3u5 Apr 16 '15 at 14:04
• Thankfully after a lot of searching I found this thread - I'm not alone! I have an app that utilizes the 'AppDataFolder' for app settings and I am experiencing the same 'lack of sync' across multiple devices. Seriously, Google: implement the option of an AppDataFolder and fall massively short on the synchronization of the Drive across devices - a major part of why it exists would be to synchronize devices in realtime! – Mark Keen Jul 2 '15 at 18:57
I'm trying to create a password generator based on the options provided by the user. My current script allows users to select uppercase, lowercase, numeric and special characters. This works perfectly and strings are generated to the user's required length; however, upon generation, numbers cluster at the end of the string with letters clustering at the beginning. A single special character parts the two. Do you have any suggestions on how to improve the process?
$('document').ready(function() {
$('button').click(function() {
var lower = "";
var upper = "";
var numeric = "";
var special = "";
var string_length = "";
if($('#12').is(':checked')) { string_length = 12; };
if($('#16').is(':checked')) { string_length = 16; };
if($('#18').is(':checked')) { string_length = 18; };
if($('#22').is(':checked')) { string_length = 22; };
if($('#24').is(':checked')) { string_length = 24; };
if($('#custom').is(':checked')) { $('#custom').show(); $('#custom').val(); } else { $('#custom').hide(); };
if($('#ch1').is(':checked')) { lower = "abcdefghijklmnopqrstuvwxyz"; } else { lower = ""; };
if($('#ch2').is(':checked')) { upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; } else { upper = ""; };
if($('#ch3').is(':checked')) { numeric = "0123456789"; } else { numeric = ""; };
if($('#ch4').is(':checked')) { special = "!£$%^&*()_-+={};:@~#?/"; } else { special = ""; };
var chars = lower + upper + numeric + special;
var randomstring = '';
var charCount = 0;
var numCount = 0;
for (var i=0; i<string_length; i++) {
if((Math.floor(Math.random() * 2) == 0) && numCount < 3 || charCount >= 5) {
var rnum = Math.floor(Math.random() * 10);
randomstring += rnum;
numCount += 1;
} else {
var rnum = Math.floor(Math.random() * chars.length);
randomstring += chars.substring(rnum,rnum+1);
charCount += 1;
}
}
$('span.string').html(randomstring);
});
});
The options 16 length, lowercase, uppercase, numeric and special characters returns something like e046pzw%65760294.
It should be noted that IDs starting with numbers are NOT valid until HTML5, and therefore may not work in older browsers. – Niet the Dark Absol Sep 13 '13 at 20:08
@Kolink Much appreciated, will take a look at that. – JoshMc Sep 13 '13 at 20:08
2 Answers
This line is your culprit:
if((Math.floor(Math.random() * 2) == 0) && numCount < 3 || charCount >= 5) {
It says:
• The first 3 characters have a bit over 50/50 chance of being numbers. The "then" is always a number and the "else" is a number sometimes depending on options.
• After you have 5 "else" selected chars (which means after col 8), you will always have a number.
This is because the "&&" takes precedence over the "||". I suggest using some parentheses to surround the OR clause if you want to have a 50/50 plus chance of using the digit. I also included an alternate way to do 50/50.
if ((Math.random() < 0.5) && (numCount < 3 || charCount >= 5)) {
I'm not sure why you want numbers to have precedence.
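The same precedence trap exists outside JavaScript, too — in Python, `and` binds tighter than `or`, so an unparenthesized condition groups the same way (illustration only):

```python
a, b, c = False, True, True

# `a and b or c` groups as `(a and b) or c`, not `a and (b or c)`.
assert (a and b or c) == True
assert (a and (b or c)) == False
```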
I should point out that this doesn't explain why, as you suggest, there will always be one special char in the middle. Is it always the same column and never anywhere else? – Lee Meador Sep 13 '13 at 20:33
An alternative solution. Just my five cents:
$(function(){
$('input, select').change(function(){
var s = $('input[type="checkbox"]:checked').map(function(i, v){
return v.value;
}).get().join(''),
result = '';
for(var i=0; i < $('#length').val(); i++)
result += s.charAt(Math.floor(Math.random() * s.length));
$('#result').val(result);
});
});
Just to give you some ideas. I'm fully aware of that this doesn't take any "type count" in to consideration.
http://jsfiddle.net/m5y3e/
You might want to show how you store the "value"s for the checkboxes. Its pretty slick. I still don't know why OP wanted the complex part with the first 3 chars and any chars over 8 to prefer digits even if digits isn't checked. Yours is more even-minded. – Lee Meador Sep 13 '13 at 20:32
@LeeMeador Thanks. I just wanted to show OP how you could make something similar and still keep it generic. All those ifs hurts my eyes ;) I think he'll be able to figure out the checkbox part at the fiddle. – Johan Sep 13 '13 at 20:39
Active Directory - Find Out When Password Expires
2020-09-15
A good security policy for Active Directory is to have users change their passwords at regular intervals (ie: 90 days). That, combined with password requirements (ie: minimum length, complexity, etc) provides users with a secure environment. As a user, you can find out when your AD password is going to expire by running the following command in a command prompt or PowerShell:
NET USER {username} /DOMAIN
And it gives you the following information:
Check password expiry with Active Directory
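If you want this in a script, one approach is to filter the command's output for the expiry line. A sketch — the sample text below is illustrative, and the real command only produces such output on a domain-joined machine:

```python
# Illustrative NET USER output (trimmed); in practice you would capture it
# with subprocess.run(["net", "user", username, "/domain"], ...).
sample_output = """\
User name                    jdoe
Password last set            6/18/2020 9:12:41 AM
Password expires             9/16/2020 9:12:41 AM
Password changeable          6/19/2020 9:12:41 AM
"""

expires = None
for line in sample_output.splitlines():
    if line.startswith("Password expires"):
        # keep everything after the two label words
        expires = line.split(None, 2)[2]

print(expires)  # 9/16/2020 9:12:41 AM
```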
My name is Rick Towns and I am an amateur astronomer and computer programmer from Canada. This is a collection of interesting posts I've gathered over the years.
Monday, July 30, 2007
Firefox Extension Development (3): Arranging Window Controls
galeki posted @ May 20, 2007 09:57 PM in Firefox Extension Development, with tags: XUL, development, extension, firefox
In the previous article we covered creating basic window controls with XUL (buttons, input boxes, radio and check boxes, ...); now let's look at how to arrange them.
Boxes: <hbox> and <vbox>
The main layout elements in XUL are called "boxes". They come in two kinds, horizontal and vertical — <hbox> and <vbox> — which simply lay out the controls they contain horizontally or vertically. If you are familiar with GTK+ programming, these two layout styles will feel very familiar.
The controls in the previous article could only be stacked vertically, in order, because that is the window's default way of arranging controls. To change it, put the controls inside boxes:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css" ?>
<window
    id="test-window"
    title="测试用的窗口"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <vbox>
    <hbox>
      <label value="用户名:"/>
      <textbox id="login"/>
    </hbox>
    <hbox>
      <label value="密码:"/>
      <textbox id="pass"/>
    </hbox>
    <button id="ok" label="登录"/>
    <button id="cancel" label="取消"/>
  </vbox>
</window>
As before, save the file above as test.xul and open it with firefox.
It runs fine, but the input box next to the "密码:" (password) label sits a little too close. We can put the two text labels and the two input boxes into two separate <vbox> elements, which solves the alignment problem:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css" ?>
<window
    id="test-window"
    title="测试用的窗口"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <vbox>
    <hbox>
      <vbox>
        <label value="用户名:"/>
        <label value="密码:"/>
      </vbox>
      <vbox>
        <textbox id="login"/>
        <textbox id="pass"/>
      </vbox>
    </hbox>
    <button id="ok" label="登录"/>
    <button id="cancel" label="取消"/>
  </vbox>
</window>
The result:
Layout inside a box
If we drag the window above larger, the controls stay at the left edge, leaving a big blank area on the right — probably not the effect we want:
We can control this with the pack attribute of <vbox> and <hbox>. pack takes three values:
1. start: for a vbox, everything inside is aligned to the top; for an hbox, to the left.
2. center: contents are centered in the box.
3. end: a vbox aligns to the bottom, an hbox to the right.
We also need to introduce the flex attribute here. By default a box's size is fixed — equal to the total size of its contents — but when flex is set to "1" the box grows along with the window, and only then can the pack attribute control the layout inside:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css" ?>
<window
    id="test-window"
    title="测试用的窗口"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <vbox>
    <hbox pack="center" flex="1">
      <vbox>
        <label value="用户名:"/>
        <label value="密码:"/>
      </vbox>
      <vbox>
        <textbox id="login"/>
        <textbox id="pass"/>
      </vbox>
    </hbox>
    <hbox pack="center" flex="1">
      <button id="ok" label="登录"/>
      <button id="cancel" label="取消"/>
    </hbox>
  </vbox>
</window>
This centers everything:
Grouping window controls
Sometimes several controls in a window are related to each other; to express that relationship, use <groupbox>:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css" ?>
<window
    id="test-window"
    title="测试用的窗口"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <groupbox>
    <caption label="9月20日是……?"/>
    <label value="植树节"/>
    <label value="爱牙日"/>
    <label value="中秋节"/>
    <label value="元宵节"/>
  </groupbox>
</window>
The result:
The label value of <caption> becomes the group's title; <caption> can even contain other controls:
<?xml version="1.0"?>
<?xml-stylesheet href="chrome://global/skin/" type="text/css" ?>
<window
    id="test-window"
    title="测试用的窗口"
    xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <groupbox>
    <caption>
      <checkbox label="Enable Backups"/>
    </caption>
    <hbox>
      <label control="dir" value="Directory:"/>
      <textbox id="dir"/>
    </hbox>
    <checkbox label="Compress archived files"/>
  </groupbox>
</window>
The result:
Exception handling in Python Programming
In this tutorial, we shall be looking at what exception handling is and how we deal with it in Python.
Let us first understand what an exception is.
As the name suggests, an "exception" is something that is not expected, and you do not have any control over it. You cannot stop an exception from occurring, but you can prevent it from causing any damage to your program by telling the interpreter what to do when it encounters the exception. It is like when you are not sure about the weather but you still carry an umbrella with you, and then after some time it starts raining. While others are caught in the rain shower, you are safe with your umbrella. The same goes for an exception in programming: if you tell the interpreter what to do in case of an exception, then you are safe; otherwise it will harm you.
Let us look at some types of exceptions that are there in Python.
1. Exception
2. StopIteration
3. SystemExit
4. StandardError
5. ArithmeticError
6. OverflowError
7. FloatingPointError
8. ZeroDivisionError
9. AssertionError
10. AttributeError
11. EOFError
12. ImportError
13. KeyboardInterrupt
14. LookupError
15. IndexError
16. KeyError
17. NameError
18. UnboundLocalError
19. EnvironmentError
20. IOError
21. SyntaxError
22. IndentationError
23. SystemError
24. SystemExit
25. TypeError
26. ValueError
27. RuntimeError
28. NotImplementedError
Let us now see how we can write code to handle an exception.
There are 3 blocks:
a) The try block -> here we write the code that we want to execute.
b) The except block -> here we name the exception that may be encountered and write the code (e.g. a print statement) to run when it occurs.
c) The else block -> here we write what we want to execute or print in case there is no error.
Let us look at an example to get a better understanding.
try:
    p = open("amazing", "w")
    p.write("Python is an amazing language and i love it!!")
except IOError:
    print("Error: can't find file or read data")
else:
    print("successfully written in the file")
    p.close()
Output:
successfully written in the file
Let us try the same example with an exception.
try:
    p = open("amazing", "r")
    p.write("Python is an amazing language and i love it!!")
except IOError:
    print("Error: file not found or unable to read data")
else:
    print("successfully written in the file")
    p.close()
Output:
Error: file not found or unable to read data
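The same try/except shape works for any of the built-in exception types listed earlier — here is a quick sketch triggering two of them:

```python
caught = []

try:
    1 / 0                      # raises ZeroDivisionError
except ZeroDivisionError as exc:
    caught.append(type(exc).__name__)

try:
    [1, 2, 3][10]              # raises IndexError
except IndexError as exc:
    caught.append(type(exc).__name__)

print(caught)  # ['ZeroDivisionError', 'IndexError']
```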
Let us create a function in which we use exception handling.
def top(value):
    try:
        return float(value)
    except ValueError:
        print("The argument is not of float-type\n")

top('pru')
Output:
The argument is not of float-type
Since we passed a string value that cannot be converted to a float, a ValueError is raised and the message inside the except block is printed when we call the function.
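As a variation on the top() function above, here is a sketch that returns None on failure instead of printing — often more convenient for the caller:

```python
def to_float(value):
    # Return value as a float, or None if it cannot be converted.
    try:
        return float(value)
    except ValueError:
        return None

assert to_float("3.5") == 3.5
assert to_float("pru") is None
```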
This is it for this tutorial; I hope that you are pretty much clear about the concept of exceptions and how to handle them. You can try it out yourself and test your knowledge.
Summary
In these tutorials, we talked about Python, its scope, and some of its uses.
We learnt about its syntax, how to get started, and how to install Python on your system. Then we learnt about loops — the for loop and the while loop that Python provides — their uses, and how we can write them in our programs.
Then we looked at functions: what functions are, how to create our own, and how they work. Then we came across the if and else statements and used them to perform some tasks.
We also learnt about the different types of operators — conditional, logical, arithmetic, etc. — and looked at how they work.
Then we learnt about some pre-defined Python functions that we can use to reduce our overhead that is the function will perform the activity and we just need to call it for the execution.
Then we learned about modules, what are modules, its uses and how can we create our own module. By importing the modules, we can use their functionalities or all the functions that are defined in the module with the help of “*” asterisk.
Then we came across file handling, how can we create a file, read the file, write in the file or perform both the operations. Learnt about the different modes to open a file.
Then we learnt about exceptions, what are exceptions, how can we create an exception or handle one with the help of expect statement and the else statement.
We pretty much tried to cover a vast variety of topics that were the part of the basics of Python. And after reading or going through the tutorials, I am sure that now you have the intermediate level knowledge of Python. Incase of any doubt, you can go through the tutorials once again.
I hope that you were able to learn something from this tutorial and it was helpful for you in any way.
------------------------------------------------------------------------------------------------------
implemented gettext instead of old hdf style
master
lars 17 years ago
parent 27b61640bc
commit 557d3e8f8c
@@ -207,6 +207,9 @@ class CryptoBoxProps(CryptoBox):
'''reads all files in path LangDir and returns a list of
basenames from existing hdf files, that should are all available
languages'''
# TODO: for now we hardcode it - change this!
return "en fr si de".split(" ")
# TODO: old implementation (before gettext) - remove it
languages = [ f.rstrip(".hdf")
for f in os.listdir(self.prefs["Locations"]["LangDir"])
if f.endswith(".hdf") ]
@@ -81,28 +81,15 @@ class CryptoBoxPlugin:
return None
def getLanguageData(self, lang="en"):
try:
import neo_cgi, neo_util
except:
raise CryptoBoxExceptions.CBEnvironmentError("couldn't import 'neo_*'! Try 'apt-get install python-clearsilver'.")
langdir = os.path.abspath(os.path.join(self.pluginDir, "lang"))
## first: the default language file (english)
langFiles = [os.path.join(langdir, "en.hdf")]
## maybe we have to load a translation afterwards
if lang != "en":
langFiles.append(os.path.join(langdir, lang + ".hdf"))
file_found = False
def getLanguageData(self):
import neo_cgi, neo_util
lang_hdf = neo_util.HDF()
for langFile in langFiles:
if os.access(langFile, os.R_OK):
lang_hdf.readFile(langFile)
file_found = True
if file_found:
return lang_hdf
else:
self.cbox.log.debug("Couldn't find a valid plugin language file (%s)" % str(langFiles))
return None
langFile = os.path.join(self.pluginDir, 'language.hdf')
try:
lang_hdf.readFile(langFile)
except (neo_util.Error, neo_util.ParseError), errMsg:
self.cbox.log.error("failed to load language file (%s) of plugin (%s):" % (langFile,self.getName()))
return lang_hdf
def loadDataSet(self, hdf):
@@ -16,6 +16,8 @@ except ImportError:
raise ImportError, errorMsg
GETTEXT_DOMAIN = 'cryptobox-server'
class PluginIconHandler:
@@ -385,7 +387,40 @@ class WebInterfaceSites:
return hdf.getValue(value, "")
def __substituteGettext(self, languages, textDomain, hdf):
import gettext
try:
translator = gettext.translation(textDomain, languages=languages)
except IOError, errMsg:
## no translation found
self.cbox.log.warn("unable to load language file: %s" % errMsg)
return hdf
def walk_tree(hdf_node):
def translate_node(node):
for (key,value) in node.attrs():
if key == 'LINK': return
node.setValue("",translator.ugettext(node.value()))
while hdf_node:
translate_node(hdf_node)
walk_tree(hdf_node.child())
hdf_node = hdf_node.next()
walk_tree(hdf)
def __getLanguageData(self, web_lang="en"):
hdf = neo_util.HDF()
hdf.readFile(os.path.join(self.prefs["Locations"]["TemplateDir"],"language.hdf"))
self.__substituteGettext([web_lang], GETTEXT_DOMAIN, hdf)
## load the language data of all plugins
for p in self.pluginList.getPlugins():
pl_lang = p.getLanguageData()
self.__substituteGettext([web_lang], "%s-feature-%s" % (GETTEXT_DOMAIN, p.getName()), pl_lang)
hdf.copy("Plugins.%s" % p.getName(), pl_lang)
self.cbox.log.debug("language data for plugin loaded: %s" % p.getName())
return hdf
def __getLanguageData2(self, web_lang="en"):
default_lang = "en"
conf_lang = self.prefs["WebSettings"]["Language"]
hdf = neo_util.HDF()
@@ -427,12 +462,6 @@ class WebInterfaceSites:
## first: assume, that the template file is in the global template directory
self.dataset["Settings.TemplateFile"] = os.path.abspath(os.path.join(self.prefs["Locations"]["TemplateDir"], template + ".cs"))
## load the language data of all plugins
for p in self.pluginList.getPlugins():
pl_lang = p.getLanguageData(self.dataset["Settings.Language"])
if pl_lang:
hdf.copy("Lang.Plugins.%s" % p.getName(), pl_lang)
if plugin:
## check, if the plugin provides the template file -> overriding
plugin_cs_file = plugin.getTemplateFileName(template)
@@ -472,4 +501,3 @@ class WebInterfaceSites:
else:
yield line + "\n"
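As an aside, the fallback path that the new `__substituteGettext` method guards with `except IOError` — no catalog installed for the requested language — can be reproduced in isolation with Python's stdlib gettext (the domain and language below are illustrative):

```python
import gettext

# Try to load a catalog for an (almost certainly absent) language;
# fall back to the identity translation instead of crashing.
try:
    translator = gettext.translation("cryptobox-server", languages=["de"])
except IOError:
    translator = gettext.NullTranslations()

# With no catalog, strings pass through untranslated.
print(translator.gettext("Password"))
```

This mirrors the warn-and-return path in the diff: translation is best-effort, and a missing catalog must not break page rendering.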
/* $Id: manpath.c,v 1.1 2011/11/23 09:47:38 kristaps Exp $ */
/*
 * Copyright (c) 2011 Ingo Schwarze
 * Copyright (c) 2011 Kristaps Dzonsons
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include <assert.h>
#include <ctype.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "mandoc.h"
#include "manpath.h"

#define MAN_CONF_FILE	"/etc/man.conf"
#define MAN_CONF_KEY	"_whatdb"

static	void	 manpath_add(struct manpaths *, const char *);

void
manpath_parse(struct manpaths *dirs, char *defp, char *auxp)
{
	if (NULL != getenv("MANPATH"))
		defp = getenv("MANPATH");

	if (NULL == defp)
		manpath_parseconf(dirs);
	else
		manpath_parseline(dirs, defp);

	manpath_parseline(dirs, auxp);
}

/*
 * Parse a FULL pathname from a colon-separated list of arrays.
 */
void
manpath_parseline(struct manpaths *dirs, char *path)
{
	char	*dir;

	if (NULL == path)
		return;

	for (dir = strtok(path, ":"); dir; dir = strtok(NULL, ":"))
		manpath_add(dirs, dir);
}

/*
 * Add a directory to the array, ignoring bad directories.
 * Grow the array one-by-one for simplicity's sake.
 */
static void
manpath_add(struct manpaths *dirs, const char *dir)
{
	char	 buf[PATH_MAX];
	char	*cp;
	int	 i;

	if (NULL == (cp = realpath(dir, buf)))
		return;

	for (i = 0; i < dirs->sz; i++)
		if (0 == strcmp(dirs->paths[i], dir))
			return;

	dirs->paths = mandoc_realloc
		(dirs->paths,
		 ((size_t)dirs->sz + 1) * sizeof(char *));

	dirs->paths[dirs->sz++] = mandoc_strdup(cp);
}

void
manpath_parseconf(struct manpaths *dirs)
{
	FILE	*stream;
#ifdef	USE_MANPATH
	char	*buf;
	size_t	 sz, bsz;

	/* Open manpath(1).  Ignore errors. */

	stream = popen("manpath", "r");
	if (NULL == stream)
		return;

	buf = NULL;
	bsz = 0;

	/* Read in as much output as we can. */

	do {
		buf = mandoc_realloc(buf, bsz + 1024);
		sz = fread(buf + (int)bsz, 1, 1024, stream);
		bsz += sz;
	} while (sz > 0);

	if ( ! ferror(stream) && feof(stream) &&
			bsz && '\n' == buf[bsz - 1]) {
		buf[bsz - 1] = '\0';
		manpath_parseline(dirs, buf);
	}

	free(buf);
	pclose(stream);
#else
	char	*p, *q;
	size_t	 len, keysz;

	keysz = strlen(MAN_CONF_KEY);
	assert(keysz > 0);

	if (NULL == (stream = fopen(MAN_CONF_FILE, "r")))
		return;

	while (NULL != (p = fgetln(stream, &len))) {
		if (0 == len || '\n' != p[--len])
			break;
		p[len] = '\0';
		while (isspace((unsigned char)*p))
			p++;
		if (strncmp(MAN_CONF_KEY, p, keysz))
			continue;
		p += keysz;
		while (isspace(*p))
			p++;
		if ('\0' == *p)
			continue;
		if (NULL == (q = strrchr(p, '/')))
			continue;
		*q = '\0';
		manpath_add(dirs, p);
	}

	fclose(stream);
#endif
}

void
manpath_free(struct manpaths *p)
{
	int	 i;

	for (i = 0; i < p->sz; i++)
		free(p->paths[i]);

	free(p->paths);
}
Mocking Test Data
4 posts, 0 answers
1. Steve
1853 posts
Member since:
Dec 2008
Posted 22 Jun 2011
Ok, I'm not going to pretend I still understand JustMock 100% — every sample always seems to be testing JustMock itself, if that makes sense :)
But we're running into an issue where our MSTests are failing due to profile bits changing. So let's say my profile says I'm part of Program 1, and we write the test to check things against that using OpenAccess queries to the DB... but then someone changes me to Program 2, and the test starts to fail because the subsequent asserts were based on me being in Program 1.
So that being said...
Is it possible to have the mocking tool generate test objects\test data?
Let me jump ship for a second and throw another example
This is an Extension Method we have inside OpenAccess
public static MppProfile GetUser(this IQueryable<MppProfile> profiles, string userName) {
return profiles.SingleOrDefault(x => x.AspnetUser.UserName.Equals(userName));
}
So if I have a test that says this
Assert.IsNotNull(_authDBContext.MppProfiles.GetUser("[email protected]"), "Failed on lowercase test MppProfiles");
...how is that mockable? When reading the examples, it seems that I would mock the return from _authDBContext.MppProfiles.GetUser(), but if I did that, isn't that just testing JustMock itself, when what I really want to test is whether profiles.SingleOrDefault throws an error? Or am I looking at this the wrong way?
**Confused**
2. Ricky
Admin
467 posts
Posted 24 Jun 2011
Hi Steve,
Thanks again for sending the issue. On your first question:
Is it possible to have the mocking tool generate test objects\test data?
=> No, actually it is not possible to generate test objects using mocking tool. Mocking tool lets you create dummy stubs dynamically.
Secondly, you can mock extension methods just like any other methods. Here is an example from your snippet:
[TestMethod]
public void TestMethod1()
{
var query = Mock.Create<IQueryable<MppProfile>>();
const string targetUser = "steve";
var expected = new MppProfile();
Mock.Arrange(() => query.GetUser(targetUser)).Returns(expected);
Assert.AreEqual(expected, query.GetUser(targetUser));
}
Here you also have to set _authDBContext.MppProfiles to the mocked instance in the following way:
Mock.Arrange(() => _authDBContext.MppProfiles).Returns(fakeQueryClass);
In addition, I have also attached the test project for you to take a look at; I hope this answers your question.
Kind regards,
Ricky
the Telerik team
Do you want to have your say when we set our development plans? Do you want to know when a feature you care about is added or when a bug fixed? Explore the Telerik Public Issue Tracking system and vote to affect the priority of the items
3. Steve
1853 posts
Member since:
Dec 2008
Posted 24 Jun 2011
Hey Ricky,
Actually I'm more confused (sorry) :)
var query = Mock.Create<IQueryable<MppProfile>>();
Ok, so here, you're creating a fake version of MppProfile?
const string targetUser = "steve";
var expected = new MppProfile();
So then here, what's going on...setting the user to get to "steve" and creating the expected type to be MppProfile (the NOT fake\mocked version)?
Mock.Arrange(() => query.GetUser(targetUser)).Returns(expected);
So then here you say that when .GetUser is called, it'll always return a MppProfile object
Assert.AreEqual(expected, query.GetUser(targetUser));
...and then you assert that is what happens?
I must have gotten something wrong while trying to break it down, please correct me :) Because to me it again seems like a demo testing JustMock. In this scenario I would want to test that GetUser actually calls the internal method to get a user and assert that I have the correct person coming back from the DB. But isn't this just faking GetUser? Like, if I called query.GetUser("FAKEPERSON"), it would still return the same fake object as query.GetUser(targetUser), even if I had no FAKEPERSON in the DB?
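The distinction Steve is drawing can be sketched in a neutral language. JustMock is .NET-only, so the sketch below uses Python's standard unittest.mock purely as an analogue, and all names are illustrative: a faked lookup returns its canned object for any input, so the only thing such a test verifies is the arrangement itself, whereas the real filtering logic has to be exercised against an actual (here in-memory) collection.

```python
from unittest.mock import MagicMock

# Analogue of the GetUser extension method: find a profile by user name.
def get_user(profiles, user_name):
    matches = [p for p in profiles if p["user_name"] == user_name]
    return matches[0] if matches else None

# 1) Faked lookup: returns the canned object for ANY input,
#    so this only verifies the mock arrangement, not the logic.
fake = MagicMock(return_value={"user_name": "steve"})
assert fake("FAKEPERSON") == {"user_name": "steve"}   # even for a missing user

# 2) Real lookup over an in-memory "table": this exercises the
#    actual filtering logic, without touching a database.
profiles = [{"user_name": "steve"}, {"user_name": "ricky"}]
assert get_user(profiles, "steve") == {"user_name": "steve"}
assert get_user(profiles, "FAKEPERSON") is None
```

In other words, both kinds of tests are legitimate, but they test different things: the first isolates a consumer of GetUser from the database, the second tests GetUser itself.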
4. Ricky
Admin
467 posts
Posted 30 Jun 2011
Hi Steve,
Thanks again for the reply. Moving on to your questions :
var query = Mock.Create<IQueryable<MppProfile>>();
Ok, so here, you're creating a fake version of MppProfile?
Here you are actually creating a mock of the IQueryable<T> type, which in your case is IQueryable<MppProfile>.
Now since you have an extension method that takes IQueryable<MppProfile> as its this argument, you can easily do the following:
Mock.Arrange(() => query.GetUser(targetUser)).Returns(expected);
This actually fakes the GetUser call so that it returns our target object, regardless of whether the underlying query is mocked or not.
Finally, during the assert,
Assert.AreEqual(expected, query.GetUser(targetUser));
I am checking whether query.GetUser returns the expected instance of the MppProfile class.
In addition, please try the latest internal build that we released recently. If that does not solve your issue, it would be great if you could send us a sample project that fails; this will help us investigate the problem further.
Kind Regards,
Ricky
the Telerik team
PerlMonks
multiple textures on a cube (opengl example)
by orange (Beadle)
on Apr 20, 2010 at 10:04 UTC ( #835749=CUFP )
This example is a toy: your child or grandchild may enjoy seeing his/her picture on one face of a rotating cube while pictures of his/her pets appear on the other faces.
a picture of the example here...
the perl source with the 6 suitable pictures here...
All the modules used in this example can be installed by Windows users from:
ppm install http://www.bribes.org/perl/ppm/OpenGL-Image.ppd
ppm install http://www.bribes.org/perl/ppm/OpenGL.ppd
I have tried the example on Windows XP/SP2 with ActiveState Perl 5.10; it should work on other platforms.
Multiple textures on the cube faces means that every face has its own unique picture. Normally we define a cube by defining its 6 sides inside:
glBegin (GL_QUADS);
all six faces definitions here.....
glEnd();
but we can partition that one block of definitions into 6 blocks of
glBegin (GL_QUADS);
one face definition here.....
glEnd();
in which every block defines one face only.
In this way we can insert the reference to a picture before every block. First I prepared 6 pictures of the same type (here JPEG) and resized them to the same dimensions (powers of 2; here 256x256) using IrfanView.
Then we use OpenGL::Image to assign the 6 pictures to 6 variables ($tex1 to $tex6),
and then get the GL info for one of those pictures. You can implement code to move the cube around the screen by manipulating the x, y, z parameters of glTranslatef; I have used here
glTranslatef(0.0, 0.0, -5.0);
Use the key s to stop the cube rotating, and n, h, v for normal, horizontal, and vertical rotation; you can devise more intelligent rotations. If you want to move the cube with the mouse, download demos.zip from http://www.bribes.org/perl/wopengl.html#ex and look at sub mouse {...} in the example teapot.pl; also look at the example glutmech.pl.
If someone devises more amusing movements, please post them.
Thank you.
use OpenGL qw/ :all /;
use OpenGL::Image;

my $tex1 = new OpenGL::Image(source=>'sunrise.jpg');
my $tex2 = new OpenGL::Image(source=>'city.jpg');
my $tex3 = new OpenGL::Image(source=>'flowers.jpg');
my $tex4 = new OpenGL::Image(source=>'umbrella.jpg');
my $tex5 = new OpenGL::Image(source=>'kid.jpg');
my $tex6 = new OpenGL::Image(source=>'cat.jpg');

# Get GL info
my ($ifmt, $fmt, $type) = $tex1->Get('gl_internalformat', 'gl_format', 'gl_type');
my ($w, $h) = $tex1->Get('width', 'height');

use constant ESCAPE => 27;

# Global variable for our window
my $window;

my $CubeRot  = 0;
my $xCord    = 1;
my $yCord    = 1;
my $zCord    = 0;
my $rotSpeed = 0.5;

# A general GL initialization function
# Called right after our OpenGL window is created
# Sets all of the initial parameters
sub InitGL {
    # Shift the width and height off of @_, in that order
    my ($width, $height) = @_;

    # Set the background "clearing color" to black
    glClearColor(0.0, 0.0, 0.0, 0.0);

    # Enables clearing of the Depth buffer
    glClearDepth(1.0);

    # The type of depth test to do
    glDepthFunc(GL_LESS);

    # Enables depth testing with that type
    glEnable(GL_DEPTH_TEST);

    # Enables smooth color shading
    glShadeModel(GL_SMOOTH);

    # Reset the projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity;

    # Reset the modelview matrix
    glMatrixMode(GL_MODELVIEW);
}

# The function called when our window is resized
# This shouldn't happen, because we're fullscreen
sub ReSizeGLScene {
    # Shift width and height off of @_, in that order
    my ($width, $height) = @_;

    # Prevent divide by zero error if window is too small
    if ($height == 0) { $height = 1; }

    # Reset the current viewport and perspective transformation
    glViewport(0, 0, $width, $height);

    # Re-initialize the window (same lines from InitGL)
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity;

    # Calculate the aspect ratio of the Window
    gluPerspective(45.0, $width/$height, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

# The main drawing function.
sub DrawGLScene {
    # Clear the screen and the depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    # Reset the view
    glLoadIdentity;

    # Move the cube away from us 5.0 units
    glTranslatef(0.0, 0.0, -5.0);

    glPushMatrix();
    glRotatef($CubeRot, $xCord, $yCord, $zCord);

    my $texid = glGenTextures_p(5);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, $texid);
    #glTexImage2D_c(GL_TEXTURE_2D, 0, 3, $w, $h, 0, GL_RGB, GL_BYTE, $tex);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex1->Ptr());
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glBegin(GL_QUADS);
    # Front Face
    glColor3f(1.0, 1.0, 1.0);    # white to display the texture
    glTexCoord2d(0.0, 0.0); glVertex3f(-1.0, -1.0,  1.0);
    glTexCoord2d(1.0, 0.0); glVertex3f( 1.0, -1.0,  1.0);
    glTexCoord2d(1.0, 1.0); glVertex3f( 1.0,  1.0,  1.0);
    glTexCoord2d(0.0, 1.0); glVertex3f(-1.0,  1.0,  1.0);
    glEnd();

    #glBindTexture(GL_TEXTURE_2D, $texid);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex2->Ptr());
    glBegin(GL_QUADS);
    # Back Face
    glColor3f(1.0, 1.0, 1.0);    # white to display the texture on
    glTexCoord2d(1.0, 0.0); glVertex3f(-1.0, -1.0, -1.0);
    glTexCoord2d(1.0, 1.0); glVertex3f(-1.0,  1.0, -1.0);
    glTexCoord2d(0.0, 1.0); glVertex3f( 1.0,  1.0, -1.0);
    glTexCoord2d(0.0, 0.0); glVertex3f( 1.0, -1.0, -1.0);
    glEnd();

    #glDisable(GL_TEXTURE_2D);
    #glBindTexture(GL_TEXTURE_2D, $texid);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex3->Ptr());
    glBegin(GL_QUADS);
    # Top Face
    glTexCoord2d(0.0, 1.0); glVertex3f(-1.0, 1.0, -1.0);
    glTexCoord2d(0.0, 0.0); glVertex3f(-1.0, 1.0,  1.0);
    glTexCoord2d(1.0, 0.0); glVertex3f( 1.0, 1.0,  1.0);
    glTexCoord2d(1.0, 1.0); glVertex3f( 1.0, 1.0, -1.0);
    glEnd();

    #glBindTexture(GL_TEXTURE_2D, $texid);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex4->Ptr());
    glColor3f(1.0, 1.0, 0.0);    # background color yellow
    glBegin(GL_QUADS);
    # Bottom Face
    glTexCoord2d(1.0, 1.0); glVertex3f(-1.0, -1.0, -1.0);
    glTexCoord2d(0.0, 1.0); glVertex3f( 1.0, -1.0, -1.0);
    glTexCoord2d(0.0, 0.0); glVertex3f( 1.0, -1.0,  1.0);
    glTexCoord2d(1.0, 0.0); glVertex3f(-1.0, -1.0,  1.0);
    glEnd();

    #glBindTexture(GL_TEXTURE_2D, $texid);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex5->Ptr());
    glColor3f(1.0, 1.0, 1.0);    # reset background color white
    glBegin(GL_QUADS);
    # Right Face
    glTexCoord2d(1.0, 0.0); glVertex3f( 1.0, -1.0, -1.0);
    glTexCoord2d(1.0, 1.0); glVertex3f( 1.0,  1.0, -1.0);
    glTexCoord2d(0.0, 1.0); glVertex3f( 1.0,  1.0,  1.0);
    glTexCoord2d(0.0, 0.0); glVertex3f( 1.0, -1.0,  1.0);
    glEnd();

    #glBindTexture(GL_TEXTURE_2D, $texid);
    glTexImage2D_c(GL_TEXTURE_2D, 0, $ifmt, $w, $h, 0, $fmt, $type, $tex6->Ptr());
    glBegin(GL_QUADS);
    # Left Face
    glTexCoord2d(0.0, 0.0); glVertex3f(-1.0, -1.0, -1.0);
    glTexCoord2d(1.0, 0.0); glVertex3f(-1.0, -1.0,  1.0);
    glTexCoord2d(1.0, 1.0); glVertex3f(-1.0,  1.0,  1.0);
    glTexCoord2d(0.0, 1.0); glVertex3f(-1.0,  1.0, -1.0);
    glEnd();

    glPopMatrix();

    $CubeRot += $rotSpeed;
    glFlush();
    glutSwapBuffers;
}

# The function called whenever a key is pressed.
sub keyPressed {
    # Shift the unsigned char key, and the x,y placement off @_, in that order.
    my ($key, $x, $y) = @_;
    #sleep(1);

    # h/v/n select the horizontal/vertical/normal rotation axis
    if    ($key == ord('h')) { $xCord = 0; $yCord = 1; $zCord = 0; }
    elsif ($key == ord('v')) { $xCord = 1; $yCord = 0; $zCord = 0; }
    elsif ($key == ord('n')) { $xCord = 1; $yCord = 1; $zCord = 0; }

    # s toggles rotation on/off
    if ($key == ord('s')) {
        if ($rotSpeed == 0.5) { $rotSpeed = 0; }
        else                  { $rotSpeed = 0.5; }
    }

    # If escape is pressed, kill everything.
    if ($key == ESCAPE) {
        # Shut down our window
        glutDestroyWindow($window);

        # Exit the program...normal termination.
        exit(0);
    }
}

# --- Main program ---

# Initialize GLUT state
glutInit;

# Select display mode: RGB color, double buffering, depth buffer
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);

# Get a 640 x 480 window
glutInitWindowSize(640, 480);

# The window starts at the upper left corner of the screen
glutInitWindowPosition(0, 0);

# Open the window
$window = glutCreateWindow("Texturing, Press n,h,v to change rotation, press s to toggle stop");

# Register the function to do all our OpenGL drawing.
glutDisplayFunc(\&DrawGLScene);

# Go fullscreen. This is as soon as possible.
#glutFullScreen;

# Even if there are no events, redraw our gl scene.
glutIdleFunc(\&DrawGLScene);

# Register the function called when our window is resized.
glutReshapeFunc(\&ReSizeGLScene);

# Register the function called when the keyboard is pressed.
glutKeyboardFunc(\&keyPressed);

# Initialize our window.
InitGL(640, 480);

# Start Event Processing Engine
glutMainLoop;

return 1;
Replies are listed 'Best First'.
Re: multiple textures on a cube (opengl example)
by BioLion (Curate) on Apr 20, 2010 at 13:32 UTC
Firstly, I think is is a cool use of OpenGL and I agree that
your child or grandchild may enjoy seeing his/her picture on a face of a rotating cube while his/her pets pictures on other sides of the cube
However, whether they would enjoy seeing this:
glBegin (GL_QUADS); all sex faces definitions here..... glEnd();
Although I guess they have to learn at some time!? ;P
Just a something something...
Thanks for your note; I have corrected the word.
Re: multiple textures on a cube (opengl example)
by Anonymous Monk on Sep 03, 2010 at 22:54 UTC
When I attempt to run on a Windows XP platform I get this for cubeGL.exe: Can't call method "Get" on an undefined value at cubeGL.pl line 14. And this for cubeGL.pl: Can't call method "Get" on an undefined value at F:\Data\Premera\TK\cubeGL\cubeGL.pl line 11.
Hello,
You need the six images to texture the six sides of the cube: just download the RAR archive containing the Perl file and the six images from the link in my first post.
Also, do not forget to install OpenGL and OpenGL::Image as described in the post.
I downloaded the six images and they exist in the same directory as the code. I also ran the ppm installs you listed for opengl and opengl image. I still get the same error as before. Any ideas?
Log In?
Username:
Password:
What's my password?
Create A New User
Domain Nodelet?
Node Status?
node history
Node Type: CUFP [id://835749]
Approved by Corion
Front-paged by almut
Operations - FHIR v3.4.0
Current Build
FHIR Infrastructure Work GroupMaturity Level: 5Ballot Status: Normative
Normative Candidate Note: This page is candidate normative content for R4 in the Infrastructure Package. Once normative, it will lose its Maturity Level, and breaking changes will no longer be made.
The RESTful API defines a set of common interactions (read, update, search, etc.) performed on a repository of typed resources. These interactions follow the RESTful paradigm of managing state by Create/Read/Update/Delete actions on a set of identified resources. While this approach solves many use cases, there is some specific functionality that can be met more efficiently using an RPC-like paradigm, where named operations are performed with inputs and outputs (Execute). Operations are used (a) where the server needs to play an active role in formulating the content of the response, not merely return existing information, or (b) where the intended purpose is to cause side effects such as the modification of existing resources, or creation of new resources. This specification describes a lightweight operation framework that seamlessly extends the RESTful API.
Operations have the following general properties:
• Each operation has a name
• Each operation has a list of 'in' and 'out' parameters
• Parameters are either resources, data types, or search parameters
• Operations are subject to the same security constraints and requirements as the RESTful API
• The URIs for the operation end-points are based on the existing RESTful API address scheme
• Operations may make use of the existing repository of resources in their definitions
• Operations may be performed on a specific resource, a resource type, or a whole system
Operations are executed using a URL derived from the FHIR endpoint, where the name of the operation is prefixed by a "dollar sign" ('$') character. For example:
POST http://fhir.someserver.org/fhir/Patient/1/$everything
When an operation has affectsState = false, and the parameters are all primitive data types with no extensions (as is the case with the example above), it may be invoked using GET as well. (Note: A HEAD request can also be used - see Support for HEAD).
Operations can be invoked on three types of FHIR endpoints:
• The "base" FHIR service endpoint (e.g. http://fhir.someserver.org/fhir): These are operations that operate on the full scale of the server. For example, "return me all extensions known by this server"
• A Resource type (e.g. http://fhir.someserver.org/fhir/Patient): These are operations that operate across all instances of a given resource type
• A Resource instance (e.g. http://fhir.someserver.org/fhir/Patient/1): These are operations that involve only a single instance of a Resource, like the $everything operation above does
The body of the invocation contains a special infrastructure resource called Parameters, which represents a collection of named parameters as <key,value> pairs, where the value may be any primitive or complex datatype or even a full Resource. It may also include strings formatted as search parameter types.
Upon completion, the operation returns another Parameters resource, containing one or more output parameters. This means that a FHIR operation can take a set of zero or more parameters in and return a set of zero or more result parameters out. Both the body of the POST and the returned result are always a Resource.
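As a sketch of that shape, here is an "in" Parameters resource for a ValueSet/$expand invocation, built as a Python dict (the url and filter values are illustrative; valueUri and valueString are the standard Parameters value[x] slots):

```python
import json

# "in" Parameters for a ValueSet/$expand call (illustrative values)
params_in = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "url", "valueUri": "http://hl7.org/fhir/ValueSet/example"},
        {"name": "filter", "valueString": "abdo"},
    ],
}

# This JSON string would be POSTed as the request body.
body = json.dumps(params_in)
print(body)
```

The server's response, when it is not the single-Resource "return" special case described below, follows the same Parameters structure with the "out" parameters.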
Some Operations with primitive input types and a single Resource output parameter named 'return' can be invoked using a GET directly, with parameters as HTTP URL parameters. In this case, the response is simply the resource that is the return value, with no Parameters resource. These kinds of usage are discussed further below.
Executing operations without any parameters is a special case. For an operation that doesn't cause any state change, the operation is invoked in a straight forward fashion:
GET [base]/Composition/example/$document
For operations that cause state changes, they must be invoked by a POST. There is no Parameters resource in this case because a Parameters resource cannot be empty, so the operation is invoked with a POST with an empty body:
POST [base]/Claim/example/$submit
Content-Length: 0
See the list of defined operations.
Implementations are able to define their own operations in addition to those defined here. Name clashes between operations defined by different implementers can be resolved by the use of the server's Capability Statement.
Also, the definition of these or additional run time operations does not prevent the use of other kinds of operations that are not dependent on and/or not integrated with the RESTful API, provided that their addressing scheme does not clash with the scheme defined here.
Each Operation is defined by:
• A context for the Operation - system, resource type, or resource instance
• A name for the Operation
• A list of parameters along with their definitions
For each parameter, the following information is needed:
• Name - the name of the parameter. For implementer convenience, the name should be a valid token (see below)
• Use - In | Out | Both
• Type - a data type or a Resource type
• Search Type - for string search parameters, what kind of search parameter they are (and what kind of modifiers they have)
• Profile - a StructureDefinition that applies additional restrictions about the resource
• Documentation - a description of the parameter's use
Parameters may be nested into multi-part parameters. Each part has the same information as a parameter, except for use, which is taken from the parameter it is part of.
The resource Operation Definition is used to provide a computable definition of the Operation.
Implementations are able to extend an operation by defining new named parameters. Implementations can publish their own extended definitions using the Operation Definition resource, and this variant definition can use OperationDefinition.base to refer to the underlying definition.
Note that the FHIR specification will never define any parameter names starting with "x-".
Operations are typically executed synchronously: a client sends a request to a server that includes the operation's in parameters and the server replies with the operation's out parameters.
The URL for an operation end-point depends on its context:
• system: the URL is [base]/$[name]
• resource type: the URL is [base]/[type]/$[name]
• resource instance: the URL is [base]/[type]/[id]/$[name]
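The three endpoint patterns can be sketched as a small URL builder (an illustrative helper, not part of any FHIR library):

```python
def operation_url(base, name, type_=None, id_=None):
    """Build the endpoint URL for a FHIR operation in each context:
    system (base only), resource type, or resource instance."""
    parts = [base]
    if type_:
        parts.append(type_)
    if id_:
        parts.append(id_)
    parts.append("$" + name)   # operation name is prefixed with '$'
    return "/".join(parts)

base = "http://fhir.someserver.org/fhir"
print(operation_url(base, "everything", "Patient", "1"))
# the instance form matches the $everything example above
```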
An operation is generally invoked by performing an HTTP POST to the operation's end-point. The submitted content is the special Parameters format (the "in" parameters) - a list of named parameters. For an example, see the value set expansion request example. Note that when parameters have a search type, the search modifiers are available, and are used on the parameter name in the Parameters resource (e.g. "code:in").
Note that the same arrangement as for the RESTful interface applies with respect to content types.
If all the parameters for the operation are primitive types, and the operation has affectsState = false (see HTTP specification definition of idempotent ), the operation may be invoked by performing an HTTP GET operation where all of the values of the parameters are appended to the URL in the search portion of the URL (e.g. after the '?' character). Servers SHALL support this method of invocation. E.g.
GET [base]/ValueSet/$expand?url=http://hl7.org/fhir/ValueSet/body-site&filter=abdo
When using the HTTP GET operation, if there is a repeating parameter for the extended operation the values for that parameter are repeated by repeating the named parameter. E.g. Observation $stats statistic parameter
GET [base]/Observation/$stats?subject=Patient/123&code=55284-4&system=http://loinc.org&duration=1&statistic=average&statistic=min&statistic=max&statistic=count
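A query string with a repeated parameter like statistic can be produced, for instance, with Python's urllib.parse.urlencode and doseq=True (a sketch; the parameter names follow the $stats example above):

```python
from urllib.parse import urlencode

# doseq=True repeats the key once per value in a list,
# yielding statistic=average&statistic=min&... as in the example.
query = urlencode(
    {
        "subject": "Patient/123",
        "code": "55284-4",
        "statistic": ["average", "min", "max", "count"],
    },
    doseq=True,
)
print(query)
```

Note that reserved characters are percent-encoded, so "Patient/123" becomes "Patient%2F123" in the query string.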
If, when invoking the operation, there is exactly one input parameter of type Resource (irrespective of whether other possible parameters are defined), then the operation can also be executed by a POST with that resource as the body of the request (and no parameters on the URL).
Servers MAY choose to support submission of the parameters represented in multi-part/form-data format as well, which can be useful when testing an operation using HTML forms.
If an operation succeeds, an HTTP Status success code is returned. This will usually be a 2xx code, though it may also be a 303 See Other. Other kinds of 3xx codes should be understood to indicate that the operation did not proceed, and the client will need to re-issue the operation if it can perform the redirection (e.g. may get redirected to an authentication step). User agents should note that servers may issue redirects, etc. to authenticate the client in response to an operation request. An HTTP status code of 4xx or 5xx indicates an error, and an OperationOutcome SHOULD be returned with details.
In general, an operation response uses the same Parameters format whether there is only one or there are multiple named out parameters.
If there is only one out parameter, which is a Resource with the parameter name "return", then the Parameters format is not used and the response is simply the resource itself.
The resources that are returned by the operation may be retained and made available in the resource repository on the operation server. In that case, the server will provide the identity of the resource in the returned resources. When resources that are not persisted are returned in the response, they will have no id property.
Use the standard RESTful API Asynchronous pattern to execute operations asynchronously.
Continuous Learning and Professional Development in the German Software Engineering Field
Introduction
Definition of continuous learning and professional development
Continuous learning and professional development refer to the ongoing process of acquiring new knowledge, skills, and competencies in the software engineering field. It involves staying updated with the latest industry trends, technologies, and best practices to enhance one’s expertise and stay competitive in the rapidly evolving field. Continuous learning and professional development also include attending workshops, conferences, and training programs, as well as engaging in self-study and collaborative learning opportunities. By investing in continuous learning and professional development, software engineers can broaden their knowledge base, improve their problem-solving abilities, and adapt to the ever-changing demands of the industry.
Importance of continuous learning and professional development in the software engineering field
Continuous learning and professional development are of utmost importance in the software engineering field. As technology continues to evolve at a rapid pace, it is crucial for software engineers to stay updated with the latest trends, tools, and techniques. Continuous learning allows professionals to enhance their skills, broaden their knowledge, and adapt to the ever-changing demands of the industry. It enables them to stay competitive in the job market and opens up opportunities for career growth. Moreover, continuous learning fosters innovation and creativity, as it encourages software engineers to think critically, explore new ideas, and find efficient solutions to complex problems. By investing in their professional development, software engineers not only improve their own capabilities but also contribute to the overall advancement of the field.
Overview of the German software engineering field
The German software engineering field is known for its strong emphasis on continuous learning and professional development. With a thriving tech industry and a highly skilled workforce, Germany offers numerous opportunities for software engineers to enhance their knowledge and skills. The country has a robust education system that produces top-notch graduates in computer science and related disciplines. Additionally, there are various professional development programs, workshops, and conferences available for software engineers to stay updated with the latest trends and technologies. The German software engineering field values innovation and encourages engineers to continuously learn and adapt to new methodologies and tools. This focus on continuous learning ensures that software engineers in Germany are well-equipped to tackle complex challenges and deliver high-quality solutions.
Current Challenges in Continuous Learning and Professional Development
Rapidly evolving technology landscape
The German software engineering field operates in a rapidly evolving technology landscape. As advancements in technology continue to shape the industry, professionals in this field must engage in continuous learning and professional development to stay relevant. With new programming languages, frameworks, and tools emerging regularly, software engineers in Germany need to stay updated with the latest trends and best practices. This not only ensures their own personal growth but also enables them to deliver high-quality and innovative solutions to meet the demands of the ever-changing market. Rapid technology advancements also present exciting opportunities for software engineers to explore new areas such as artificial intelligence, machine learning, and blockchain. By embracing continuous learning, professionals in the German software engineering field can position themselves as valuable assets in an industry that thrives on innovation and adaptability.
Increasing demand for new skills
In the German software engineering field, there has been an increasing demand for new skills. With the rapid advancements in technology and the ever-changing nature of the industry, professionals are required to continuously learn and develop their skills to stay relevant and competitive. Employers are seeking individuals who can adapt to new technologies, frameworks, and methodologies, and are willing to invest in their employees’ professional development. As a result, professionals in the German software engineering field are constantly seeking opportunities to upskill and expand their knowledge base. Continuous learning has become a necessity for career growth and to meet the evolving demands of the industry.
Lack of time and resources for learning
In the German software engineering field, one of the major challenges faced by professionals is the lack of time and resources for learning. With the rapidly evolving technology landscape and the constant introduction of new tools and frameworks, it is crucial for software engineers to continuously update their skills and stay up-to-date with the latest industry trends. However, due to demanding work schedules and project deadlines, many professionals find it difficult to allocate dedicated time for learning. Additionally, the availability of resources such as training programs, workshops, and online courses may be limited, further hindering the learning process. As a result, professionals in the German software engineering field often struggle to keep pace with the rapidly advancing industry, which can impact their career growth and competitiveness in the job market.
Strategies for Continuous Learning and Professional Development
Setting clear learning goals
Setting clear learning goals is a crucial step in continuous learning and professional development in the German software engineering field. By defining specific objectives, individuals can focus their efforts and resources on acquiring the necessary skills and knowledge to thrive in this rapidly evolving industry. Clear learning goals also provide a sense of direction and motivation, as they serve as a roadmap for personal growth and career advancement. Moreover, setting clear learning goals allows software engineers to stay relevant and competitive in a highly competitive job market, where continuous learning is not only valued but expected. Overall, by establishing clear learning goals, professionals in the German software engineering field can enhance their expertise, expand their capabilities, and stay ahead in an ever-changing technological landscape.
Utilizing online learning platforms and resources
Utilizing online learning platforms and resources has become increasingly important in the German software engineering field. With the rapid advancements in technology and the ever-changing nature of the industry, professionals need to constantly update their skills and stay up-to-date with the latest trends and developments. Online learning platforms provide a convenient and flexible way for software engineers to access a wide range of courses, tutorials, and resources from anywhere in the world. These platforms offer a variety of learning formats, including video lectures, interactive exercises, and virtual labs, allowing professionals to learn at their own pace and tailor their learning experience to their specific needs. By leveraging online learning platforms, software engineers can enhance their knowledge, acquire new skills, and stay competitive in the dynamic and fast-paced field of software engineering.
Participating in conferences and workshops
Participating in conferences and workshops is an essential aspect of continuous learning and professional development in the German software engineering field. These events provide opportunities for software engineers to stay updated with the latest industry trends, technologies, and best practices. Conferences and workshops offer a platform for networking with industry experts, exchanging knowledge, and gaining insights into innovative approaches. By attending these events, software engineers can expand their skill set, enhance their problem-solving abilities, and broaden their perspectives. Moreover, participating in conferences and workshops allows professionals to showcase their expertise, build their reputation, and establish themselves as thought leaders in the field. Overall, these events play a crucial role in fostering growth, learning, and collaboration within the German software engineering community.
Benefits of Continuous Learning and Professional Development
Enhanced technical skills
Enhanced technical skills are crucial in the ever-evolving field of software engineering, particularly in Germany. As technology continues to advance at a rapid pace, software engineers must stay updated with the latest tools, frameworks, and programming languages. This requires a commitment to continuous learning and professional development. By acquiring and honing their technical skills, software engineers in Germany can remain competitive and adapt to the changing demands of the industry. Whether it’s mastering new programming languages or staying abreast of emerging trends, a focus on enhanced technical skills enables software engineers to deliver high-quality solutions and contribute to the success of their teams and organizations.
Improved job performance and career prospects
Improved job performance and career prospects are two key benefits that come with continuous learning and professional development in the German software engineering field. By actively seeking out opportunities to expand their knowledge and skills, software engineers can stay up-to-date with the latest industry trends and technologies. This not only allows them to perform their job more effectively but also opens doors to new career opportunities. Employers value professionals who are committed to continuous learning, as it demonstrates their dedication to staying relevant in a rapidly evolving field. Additionally, software engineers who invest in their professional development often have access to a wider range of job opportunities and are more likely to be considered for promotions and leadership roles within their organizations. Therefore, embracing continuous learning and professional development is essential for software engineers in Germany to enhance their job performance and advance their careers.
Adaptability to changing industry trends
In the fast-paced and ever-evolving field of German software engineering, adaptability to changing industry trends is crucial for professionals to thrive. With technology constantly advancing and new methodologies and tools emerging, it is essential for software engineers to stay updated and continuously learn. This adaptability allows them to effectively respond to the changing needs and demands of the industry, ensuring their skills remain relevant and valuable. By actively seeking out opportunities for professional development, such as attending conferences, taking online courses, or participating in workshops, software engineers can enhance their knowledge and stay at the forefront of the field. Additionally, being adaptable also means being open to new ideas and approaches, embracing innovation, and being willing to step out of one’s comfort zone. By cultivating an adaptable mindset, software engineers can not only keep up with industry trends but also contribute to driving innovation and shaping the future of software engineering in Germany.
Best Practices for Implementing Continuous Learning and Professional Development
Creating a culture of learning within organizations
Creating a culture of learning within organizations is essential for continuous improvement and professional development in the German software engineering field. It involves fostering an environment that encourages employees to seek out new knowledge, acquire new skills, and stay up-to-date with the latest industry trends and technologies. By promoting a culture of learning, organizations can empower their employees to take ownership of their professional growth and development, leading to increased productivity, innovation, and overall success. This can be achieved through various initiatives such as providing training and development opportunities, encouraging collaboration and knowledge sharing, and recognizing and rewarding continuous learning efforts. Ultimately, creating a culture of learning within organizations not only benefits individual employees but also contributes to the growth and competitiveness of the German software engineering industry as a whole.
Providing dedicated time and resources for learning
Continuous learning and professional development are crucial in the German software engineering field. To ensure that professionals in this industry stay up-to-date with the latest technologies and trends, it is important for companies to provide dedicated time and resources for learning. By allocating specific time for employees to engage in learning activities, such as attending workshops, conferences, or online courses, companies can foster a culture of continuous improvement. Additionally, providing resources such as access to learning platforms, books, and training materials can further support employees in their quest for knowledge and skill enhancement. By prioritizing continuous learning and professional development, companies can not only enhance the expertise of their workforce but also stay competitive in the ever-evolving software engineering landscape.
Encouraging knowledge sharing and collaboration
Encouraging knowledge sharing and collaboration is vital in the German software engineering field to foster continuous learning and professional development. By creating a culture that values open communication and teamwork, organizations can facilitate the exchange of ideas and expertise among their employees. This can be achieved through various initiatives such as regular team meetings, knowledge sharing sessions, and collaborative projects. Additionally, providing platforms and tools that enable easy sharing and documentation of knowledge can further enhance the knowledge sharing and collaboration efforts. By encouraging knowledge sharing and collaboration, organizations can create a supportive environment where individuals can learn from each other, stay updated with the latest industry trends, and collectively contribute to the growth and innovation of the software engineering field in Germany.
Conclusion
Summary of the importance of continuous learning and professional development
Continuous learning and professional development play a crucial role in the German software engineering field. In an industry that is constantly evolving and advancing, it is essential for professionals to stay updated with the latest technologies, tools, and methodologies. Continuous learning allows software engineers to enhance their skills and knowledge, enabling them to adapt to the ever-changing demands of the industry. Moreover, professional development provides opportunities for career growth and advancement, as it equips individuals with the necessary expertise to take on more challenging projects and responsibilities. By investing in continuous learning and professional development, software engineers in Germany can ensure their long-term success and contribute to the overall growth and innovation of the field.
Call to action for software engineers to prioritize learning
In today’s rapidly evolving field of software engineering, continuous learning and professional development have become essential for staying competitive and relevant. As technology advances and new frameworks, languages, and tools emerge, software engineers must prioritize learning to keep up with the latest trends and best practices. It is not enough to rely solely on past knowledge and experience; instead, software engineers should actively seek out opportunities to expand their skillset and stay ahead of the curve. By dedicating time and effort to ongoing learning, software engineers can enhance their problem-solving abilities, improve their coding practices, and ultimately contribute to the success of their projects and teams. Therefore, it is crucial for software engineers to embrace a call to action and make learning a top priority in their professional lives.
Future prospects of continuous learning in the German software engineering field
Continuous learning and professional development are essential for staying relevant in the rapidly evolving German software engineering field. As technology continues to advance at an unprecedented pace, software engineers need to continuously acquire new skills and knowledge to keep up with the latest trends and developments. By embracing a culture of continuous learning, software engineers can enhance their expertise and adapt to the changing demands of the industry. This not only improves their career prospects but also contributes to the overall growth and innovation in the German software engineering field. With the increasing demand for skilled software engineers, those who prioritize continuous learning are likely to have a competitive advantage and better future prospects in their careers.
blackzero notes:

You see, it's not that I'm not open to alternatives. It's just that I've read a lot of forums addressing this problem (of using variables as variable names) and, as much as I tried, I was unable to use a hash of hashes to solve my needs.

So I wanted to use the dirty way because I needed to solve it fast. But YES, I would like to improve my code so that I don't have to disable strict. And I am thankful for your help.

So here is the code I'm using now. Any improvement is welcome.

#!/usr/bin/perl
#use strict;
use warnings;

my $input_data = << 'REPORT_OF_INPUT';
Bet01;;Bet05;;Bet06;;Bet12;;
230;238;101;103;138;146;112;116;;
230;238;101;103;146;146;108;112;;
224;238;0;0;146;146;110;118;;
238;238;0;0;146;146;112;114;;
REPORT_OF_INPUT

my @inputs = split "\n", $input_data;
my $line = shift @inputs;
$line =~ s/;;/;/g;
my @loci_codes = split ";", $line;

foreach $line ( @inputs ) {
    my @alelos = split ";", $line;
    foreach my $locus ( @loci_codes ) {
        no strict 'refs';
        my $allele1 = shift @alelos;
        my $allele2 = shift @alelos;
        ${$locus}{$allele1} += 1;
        ${$locus}{$allele2} += 1;
    }
}

foreach my $locus ( @loci_codes ) {
    foreach my $key ( keys %{$locus} ) {
        print "$locus ", $key, " = ", $$locus{$key}, "\n";
    }
}

I'm using the data inside the code here just to make it easier to decode. But in my real program the content of $input_data will come from an external file, so its content may vary.
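For what it's worth, the "hash of hashes" shape being discussed (counts keyed first by locus, then by allele) avoids symbolic references entirely. Below is a sketch of that structure using Python dicts, chosen purely to show the data shape rather than as a drop-in replacement for the Perl script; the sample data is the same as in the post.

```python
# Counts keyed first by locus, then by allele: the "hash of hashes" shape,
# so no symbolic references (Perl's `no strict 'refs'`) are needed.
input_data = """Bet01;;Bet05;;Bet06;;Bet12;;
230;238;101;103;138;146;112;116;;
230;238;101;103;146;146;108;112;;
224;238;0;0;146;146;110;118;;
238;238;0;0;146;146;112;114;;"""

lines = input_data.splitlines()
loci = [code for code in lines[0].split(";") if code]  # ['Bet01', 'Bet05', 'Bet06', 'Bet12']

counts = {locus: {} for locus in loci}
for line in lines[1:]:
    alleles = [a for a in line.split(";") if a]
    # each locus owns the next pair of allele columns
    for locus, allele1, allele2 in zip(loci, alleles[0::2], alleles[1::2]):
        counts[locus][allele1] = counts[locus].get(allele1, 0) + 1
        counts[locus][allele2] = counts[locus].get(allele2, 0) + 1

for locus in loci:
    for allele, n in counts[locus].items():
        print(locus, allele, "=", n)
```

In Perl, the same shape is simply a hash of hashes, e.g. `$counts{$locus}{$allele}++`, which works fine with strict enabled.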
Import with project.
[auf_rh_dae.git] / project / rh / forms.py
# -*- encoding: utf-8 -*-

from django import forms
from ajax_select.fields import AutoCompleteSelectField
from project.rh.models import Dossier, Contrat, AyantDroit, Employe


class AjaxSelect(object):

    class Media:
        css = {
            'all': ('jquery-autocomplete/jquery.autocomplete.css', 'css/select.css', )
        }
        js = ('js/jquery-1.5.1.min.js', 'jquery-autocomplete/jquery.autocomplete.js', )


class FormDate(object):

    def clean_date_fin(self):
        date_fin = self.cleaned_data['date_fin']
        if date_fin is None:
            return date_fin
        # date_debut may be absent if it failed its own validation
        date_debut = self.cleaned_data.get('date_debut')
        if date_debut is not None and date_fin < date_debut:
            raise forms.ValidationError(u"La date de fin est antérieure à la date de début")
        return date_fin


class DossierForm(forms.ModelForm, FormDate):

    class Meta:
        model = Dossier


class ContratForm(forms.ModelForm, FormDate):

    class Meta:
        model = Contrat


class AyantDroitForm(forms.ModelForm, AjaxSelect):

    # does not work in an inline
    #nationalite = AutoCompleteSelectField('pays', help_text="Taper le nom ou le code du pays", required=False)

    def __init__(self, *args, **kwargs):
        super(AyantDroitForm, self).__init__(*args, **kwargs)
        self.fields['date_naissance'].widget = forms.widgets.DateInput()

    class Meta:
        model = AyantDroit


class EmployeAdminForm(forms.ModelForm, AjaxSelect):

    nationalite = AutoCompleteSelectField('pays', help_text="Taper le nom ou le code du pays", required=False)
    pays = AutoCompleteSelectField('pays', help_text="Taper le nom ou le code du pays", required=False)

    class Meta:
        model = Employe

    def __init__(self, *args, **kwargs):
        super(EmployeAdminForm, self).__init__(*args, **kwargs)
        self.fields['date_naissance'].widget = forms.widgets.DateInput()
Resource Sanitization
Terraform state can contain very sensitive data. Sometimes this is unavoidable because of the design of certain Terraform providers, or because the definition of what is sensitive isn't always simple and may vary between individuals and organizations. To avoid leaking sensitive data, Spacelift takes the approach of automatically sanitizing any resources stored or passed to plan policies by default.
For example, if we take the following definition for an EC2 instance:
resource "aws_instance" "this" {
  ami           = "ami-abc123"
  instance_type = "t3.small"

  root_block_device {
    volume_size = 50
  }

  tags = {
    Name = "My Instance"
  }
}
Spacelift will supply something similar to the following to any plan policies:
{
  ...,
  "terraform": {
    "resource_changes": [
      {
        "address": "module.instance.aws_instance.this",
        "change": {
          "actions": ["create"],
          "after": {
            "ami": "c4cb6118",
            ...,
            "tags": {
              "Name": "d3dac282"
            },
            "tags_all": {
              "Name": "d3dac282"
            }
          }
        }
      }
    ]
  }
}
As you can see, the ami and tags fields have had their values sanitized, and replaced with hashes. The same sanitization is also applied to resources shown in the resources views.
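To illustrate the idea, value-based sanitization can be sketched as replacing each sensitive string with a short, stable token. This is only a sketch of the general approach: the actual hash function Spacelift uses, and the exact set of fields it hashes, are assumptions here, not documented behavior.

```python
import hashlib

def sanitize(value: str) -> str:
    # Stand-in hashing scheme (an assumption, not Spacelift's real one):
    # derive a short, stable hex token from the sensitive value.
    return hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]

resource_after = {
    "ami": "ami-abc123",
    "instance_type": "t3.small",
    "tags": {"Name": "My Instance"},
}

# Hash the string values treated as sensitive (here: the AMI and tag values).
sanitized = {
    "ami": sanitize(resource_after["ami"]),
    "instance_type": resource_after["instance_type"],
    "tags": {k: sanitize(v) for k, v in resource_after["tags"].items()},
}
print(sanitized["tags"]["Name"])  # a short token, not "My Instance"
```

Because identical inputs map to identical tokens, a policy can still compare a known literal against its sanitized form, which is what the sanitized() helper described below enables.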
Sanitization and Plan Policies
Sometimes you need to perform a comparison against a sanitized value in a plan policy. To help with this we provide a sanitized() helper function that you can use in your policies.
Disabling Sanitization
If you have a situation where the sanitized() helper function doesn't provide you with enough flexibility to create a particular policy, you can disable sanitization completely for a stack. To do this, add the feature:disable_resource_sanitization label to your stack. This will disable sanitization for any future runs.
Should I Defrag My External Drive, and If So, How?
//
Should I defrag my external hard drive? I thought I should as it contains some important documents and my computer backups. As such, I tried to use Defraggler (Piriform Ltd's program) for the purpose. The program has been running on my external hard drive (capacity 2T) for the past 10 hours and it has done only 10% of defrag. The analysis does say that there are 32 fragmented files and 92% fragmentation. Is there anything I am not doing right? How should I defrag this drive, if I should?
While there are alternatives, you’re doing it right; Defraggler is a fine program to use.
The more important question is whether, even with “92%” fragmentation, you should even be bothering.
Read more: Should I Defrag My External Drive, and If So, How?
What External Drive Should I Get?
I frequently recommend you purchase an external hard drive for your backups. Backing up to an external drive is probably the most important first step in getting an overall backup strategy in place.
The inevitable question is, “What external drive should I buy?”
The problem, of course, is that the answer keeps changing. Technology evolves, and as a result, so does my recommendation.
Let me give you a few guidelines, and then a few current (as of this writing) examples.
Read more: What External Drive Should I Get?
How Do I Create a Bootable USB Thumb Drive from an ISO?
ISO files are disk images often used to distribute software. In years past, we burned them to CDs. As the ISOs themselves became larger, we’d burn them to DVDs instead. In either case, we would then boot from the CD or DVD to run whatever the software provided. A good example might be operating system installation DVDs.
More and more machines are coming without optical drives — that is, they don’t have the ability to read a CD or DVD, much less boot from it.
Fortunately, there are tools we can use to take an ISO that contains a bootable image and place it on a USB thumb drive from which you can boot.
Read more: How Do I Create a Bootable USB Thumb Drive from an ISO?
Why does my phone charge more quickly on some chargers?
//
I have several USB chargers for my mobile phone. I’ve noticed that one will charge my phone quickly – like in an hour if it’s really dead – while another will take several hours. And connecting a USB cable between my phone and laptop will also charge it, but that seems slowest of all! What gives?
Two things are at play here: how much power your charger can supply, and how much power your phone is using while it’s being charged.
I’ll warn you: for the first, at least, you’re going to need a magnifying glass, or at least extremely good eyesight.
Read more: Why does my phone charge more quickly on some chargers?
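To see why the supplied power matters so much, a back-of-the-envelope estimate helps. All numbers below are assumptions chosen for the arithmetic (a typical phone battery and common USB power levels), not measurements of any particular device, and the estimate ignores charging losses and the slower top-up phase.

```python
# Rough charge-time estimate: energy stored divided by power supplied.
battery_wh = 3000 / 1000 * 3.7  # assumed 3000 mAh battery at a nominal 3.7 V, about 11.1 Wh

chargers = [
    ("USB 2.0 port on a laptop", 5.0, 0.5),
    ("small wall charger", 5.0, 1.0),
    ("fast wall charger", 5.0, 2.0),
]

for label, volts, amps in chargers:
    watts = volts * amps
    hours = battery_wh / watts
    print(f"{label}: {watts:.1f} W -> roughly {hours:.1f} h from empty")
```

The same logic explains the observation in the question: a laptop USB port typically supplies the least current, so it charges slowest, while a 2 A wall charger can be several times faster.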
My machine has no optical drive. What if I need one?
//
My aunt just bought a Mac and it seems to have no optical drive. I've not been there to see the computer, although I had the same reaction to the floppy drive disappearing and have not used them for years. But I switched to CD-Rs and now DVD-Rs, mostly for backups. How do you buy software? Not everything can be done on flash and downloads. Is having broadband required for today's Macs? Windows 8 was optional. Doesn't anyone still worry about main hard drive failure anymore? My PC is backed up onto DVDs, including six recovery DVDs for Windows 8.
The scenario you described is now very, very common. In fact, none of the three Macs in this household have optical drives, and neither does my Microsoft Surface Pro running Windows 8.
But it’s not really a problem. I’ll explain why and what I do.
Read more: My machine has no optical drive. What if I need one?
Why Am I Getting the “USB Inserted” Sound When I Haven’t Inserted Anything?
//
I have a Lenovo T-430 running Windows 7, 64-bit. Every time I start this machine I get the “USB inserted” sound, not once but twice. This is when there’s nothing connected to the laptop. Do you have any idea why this is happening and how I can track down what hardware is causing this and how do I fix it? It’s really no big deal but it is kind of annoying.
The good news here is aside from the annoyance of the sound, this probably isn’t anything to worry about. I have a couple of ideas.
Read more: Why Am I Getting the “USB Inserted” Sound When I Haven’t Inserted Anything?
Why do I need to unplug and plug in my USB device to keep it working?
//
I have a year and a half old Dell XPS Desktop running Windows 7 (which is kept updated) that has no PS2 ports. I have a monitor with a built-in USB hub (where the hub has its own power supply). I also have a PS2 keyboard and PS2 trackball, both of which I love and want very much to keep using. When I got this new Dell, I got a PS2-to-USB “Y”-adapter and plugged it into the monitor’s USB hub. Everything worked well for over a year – except that maybe once a week or two, they stopped responding when the computer woke up, and I had to unplug and replug the USB plug.
This would solve the problem for another week or two. About a month ago, this issue suddenly became not just a daily problem, but even needed doing every five to ten minutes or so. The power supply to the monitor’s USB hub seems to be fine and the other USB functions seem to work fine as well. I’ve re-routed my PS2 USB cord directly to the computer’s USB port and this solved the five-to-ten minute problem, but I still need to unplug and replug the USB plug every day when I wake the computer up. And yes, I reboot regularly, which has had no effect or improvement on the issue. What do you think?
The problem that you’re experiencing is not uncommon. You tried a couple of things that I would normally recommend, but I can think of a few more that might help you in this scenario.
Read more: Why do I need to unplug and plug in my USB device to keep it working?
Can I Use an External Keyboard with My Laptop?
//
I just got a new Lenovo laptop and I’m having the darnedest time typing on it. I’m upgrading from a PC. I used to use this wonderful Windows ergonomic keyboard, which I loved and cherished. I had no issues or problems and I knew where everything was. With all of these newly built laptops now, I’m forced to keep my palms straight and elbows in. I can’t stand it. I constantly miss keys, touching the middle pad thingy. I’m constantly misspelling words, going back and backspacing words because I’ve hit the Enter key instead of the Shift key, cursing like mad. I’m going insane. Is there any way that I can just plug my old ergonomic keyboard back into the USB port, slap cardboard over the laptop keyboard, and go about my regular carefree life? Please say there’s a way!
I feel your pain. My dissatisfaction with the keyboard on my Microsoft Surface prevents me from using it more. It’s not a bad keyboard. I’m sure that it works well for most people. It’s just not particularly suited for my large hands and fat fingers.
Read more: Can I Use an External Keyboard with My Laptop?
Will a power loss cause data loss on SSDs?
//
If I buy an SSD hard drive to replace my dead mechanical hard drive, what are the drawbacks? I have no UPS. If the electricity goes out during writing, will the data be gone like a USB flash drive?
I’ll admit that the phrase “like a USB flash drive” in your question bothers me. It kind of implies that flash drives always lose their data on power loss and that simply is not true.
Sudden power loss will actually affect all three of the different devices (physical hard drives, a Solid State Drive, or a flash drive) in pretty much the same way.
There are three things that can happen when power is suddenly removed from your computer while you’re using your hard drive.
Read more: Will a power loss cause data loss on SSDs?
Can my PC get a virus from my smartphone?
//
I wondered: if a smartphone is infected with a virus, is there a chance that the system (a PC or a laptop) could also get infected if Windows-based malware is present on the smartphone? Secondly, if a USB port is disabled in the system (PC or laptop), can there still be a virus attack on the system?
There are two questions here. Let me address the first one.
Read more: Can my PC get a virus from my smartphone?
Mathematically, the exclamation point denotes a factorial. The factorial of a non-negative integer n, written n!, is the product of all positive integers less than or equal to n:

n! = n * (n - 1) * (n - 2) * ... * 2 * 1

For example, 4! = 4 * 3 * 2 * 1 = 24 and 5! = 5 * 4 * 3 * 2 * 1 = 120. The factorial value of 0 is by definition equal to 1: zero factorial counts the number of ways to arrange a data set with no values in it, which is one. For negative integers, factorials are not defined.

The factorial operation is found in many areas of math, mainly in probability and statistics, combinatorics, algebra, and data analysis. Its most basic appearance is due to the fact that there are n! ways to arrange n distinct elements into a sequence (the permutations of any set of objects), so the function is used, among other things, to find the number of ways n objects can be arranged. For example, to know how many ways the 13 cards of the heart suit can be arranged, we calculate 13! = 13 x 12 x 11 x 10 x 9 x 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 6,227,020,800. (The word also appears in experimental design: fractional factorial designs are a good choice when resources are limited or the number of factors is large, because they use fewer runs than full factorial designs, and such designs are analyzed with tools like the two-way ANOVA.)

By hand, you take the number under the factorial sign and multiply it by all the smaller positive integers down to 1; these calculations quickly become time-consuming with just pen and paper. Fortunately, many calculators have a factorial key (look for the ! symbol), and you can type an exclamation point on a TI-84 Plus calculator (a tip from Jeff McCalla and C. C. Edwards). In Excel you must use the FACT function: =FACT(5) calculates the factorial of 5, while entering =5! produces an error message, since Excel does not recognize the exclamation-mark notation. In R, the built-in factorial() function handles single values and whole vectors, returning NaN for negative input:

> factorial(0)
[1] 1
> factorial(5)
[1] 120
> factorial(c(2, 3, 4))
[1]  2  6 24
> factorial(-1)
[1] NaN

In MATLAB, f = factorial(n) returns the product of all positive integers less than or equal to n, where n is a nonnegative integer value; if n is an array, f contains the factorial of each element, with the same data type and size as n.

In code, a factorial is typically computed iteratively (with a for or while loop) or recursively, and writing such a program is a common exercise in JavaScript, Java, Python, C, C++, and PHP. For example, the following PHP program calculates the factorial of 5:

<?php
$fact = 1;
// multiply the running product by each number from 5 down to 1
for ($i = 5; $i >= 1; $i--) {
    $fact = $fact * $i;
}
echo $fact; // prints 120

One caveat: n! grows at a faster rate than the exponential function 2^n, so overflow occurs even for two-digit numbers if a fixed-size built-in data type is used. The factorial of 100 has 158 digits; to calculate factorials of such numbers, we need arbitrary-precision arithmetic or data structures such as arrays or strings to hold the digits.
Then appropriate message is displayed loop 2 ) using for loop 2 ) using while loop appropriate message is.. The function returns 1 when the value of num is 0 factorial, stands for four times times!: 5 Output: 120 as its least value is 1 will see a text field that requires number a. Be somewhat tedious to calculate factorials in excel of the entered number predefined be. Even for two-digit numbers if we use built-in data type mathematically, the point! Nd multiply it by all the numbers below it starting from the number till 1 is.! With all the numbers below it starting from the number till 1 is reached: Sample Solution: Code! Factorial for and while loop 3 ) finding factorial of a number is calculated by it... Can experiment with any number-crunching software R language which is used to calculate the factorial a! Point, or four factorial, stands for four times three times two times one strings! Even for two-digit numbers if we use built-in data type its least value is.... The calculate button is considered as the 4 × 3 × 2 × 1=6,... Program, lets understand what is factorial: factorial of a number n is denoted as n example 3... 120, and 10 factorial is 120, and 10 factorial is.. The given textfield to find the factorial operationis found in many areas of math, mainlyin probability statistics! Ways “ n ” objects can be arranged x 1 = 120 2 × 1=6 ) function the..., lets understand what is factorial: factorial of that number, or four factorial, stands for four three... Write a definition of factorial is predefined to be 1 as its least value is.! Two times one automate the multiplications considered as the 4 × 3 × 2 × 1, that 24... An Input and find its factorial of 0 is by definition equal to 1 = 4 * 3 2., many calculators have a factorial key ( look for the lets understand what is:. Be arranged factorial of a number entered by user it is a number under the factorial makes!, 3! =3 × 2 × 1=6 number under the factorial of an integer can found. 
Compromised in any manner while loop 3! =3 × 2 × 1 that... 1 when the value of 0 is by definition equal to 1 us the factorial calculator makes it easy find. Starting from the number till 1 is reached loop 2 ) using and. Used to calculate 2 ) using while loop 3 ) finding factorial of 5! Factorial practically speaking is any number multiplied by every real positive whole number less than itself they be! A two way ANOVA also known as factorial Analysis is any number multiplied by every positive. Perform to use data structures such as array or strings take a number entered by user lets! With all the numbers below it starting from 1 of R language which is used, among other things to. Enter in the number of ways “ n ” objects can be found using recursive. Things, to find factorial of a number compute, but they can be tedious! Nd multiply it by all the previous numbers to it, except for zero of... Simplest way to find factorial of number using for loop is iterated on the of!, five factorial is 120, and 10 factorial is predefined to be 1 as least. And simplest way to find the factorial of a number of ways “ n objects.
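The loop and recursion approaches described above can be sketched in a few lines. This is a minimal Python sketch (the PHP and C programs mentioned in the text follow the same structure):

```python
def factorial_iterative(n):
    """Multiply all integers from n down to 1; 0! is 1 by definition."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    """Recursive definition: n! = n * (n-1)!, with 0! = 1 as the base case."""
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

print(factorial_iterative(5))    # 120, i.e. 5 * 4 * 3 * 2 * 1
print(factorial_recursive(10))   # 3628800
```

Python integers are arbitrary-precision, so even factorial_iterative(100) — a 158-digit number — works without the overflow issue that fixed-width integer types in C or PHP would hit.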
Sequence Analysis
A Lempel-Ziv-style compression method for repetitive texts
This software contains the algorithm presented in "A Lempel-Ziv-style compression method for repetitive texts" at the Prague Stringology Conference 2017.
Installation:
1. Download this .tar.xz archive and unpack it with the command "tar xf lcpcompress.tar.xz".
2. Follow the install instructions in the README file of the archive.
Space-efficient Parallel Construction of Succinct Representations of Suffix Tree Topologies
This software contains algorithms for sequential and parallel construction of succinct representations of tree topologies.
Installation:
1. Download this .tar.bz2 file
2. Unpack the .tar.bz2 file with the command "tar xfj bps.tar.bz2".
3. Follow the install instructions in the README file of the archive.
compressed de Bruijn graph - Tools for computing the compressed de Bruijn graph
The following steps describe how to use the algorithms presented in "A representation of a compressed de Bruijn graph for pan-genome analysis that enables search" in Algorithms for Molecular Biology:
1. Download this bz2-file.
2. Unpack the bz2 file with the command "tar xfvj cdbg_search.tar.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
The following steps describe how to use the algorithms presented in "Graphical pan-genome analysis with compressed suffix trees and the Burrows-Wheeler transform" in Bioinformatics:
1. Download this bz2-file.
2. Unpack the bz2 file with the command "tar xfvj cdbg_bioinformatics.tar.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
The following steps describe how to use the algorithms presented in "Efficient Construction of a Compressed de Bruijn Graph for Pan-Genome Analysis" at CPM2015:
1. Download this bz2-file.
2. Unpack the bz2 file with the command "tar xfvj cdbg_cpm2015.tar.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
The test data can be found here.
context-sensitive-repeats - Tools for computing context-diverse and near supermaximal repeats
Installation:
1. Download this tar.bz2-file.
2. Unpack the .tar.bz2 file with the command "tar xfj context-sensitive-repeats.tar.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
construct_bwt - A tool for computing the BWT space-efficiently
Installation:
1. Download this bz2-file.
2. Unpack the bz2 file with the command "tar xfvj construct_bwt.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
bwt_reverse - A tool for computing the BWT of the reverse string
Installation:
1. Download this bz2-file.
2. Unpack the bz2 file with the command "tar xfvj bwt_reverse.bz2".
3. Follow the install instructions in the readme.txt file of the archive.
bwt based LACA - A tool for computing the LCP array directly from the BWT
Installation:
1. Install libdivsufsort of Yuta Mori
2. Download this bz2-file.
3. Unpack the bz2 file with the command "tar xfvj bwt_based_laca.bz2".
4. Change directory to sdsl2/build and execute "cmake .." and "make".
5. Execute "make" in the root directory of the archive.
The last "make" will produce the executables bwt_based_laca, bwt_based_laca2 and others. You'll find more information in the readme.txt file (at the root directory of the archive).
Note: You can use the tool dbwt of Kunihiko Sadakane to compute the BWT directly from the input.
backwardMEM - A tool for computing maximal exact matches
Installation:
1. Download this tar.gz-file.
2. Unpack the tar.gz file with "tar -xzvf calcMEM.tar.gz".
3. Change directory to backwardMEM and unpack sdsl-0.7.3.tar.gz with "tar -xzvf sdsl-0.7.3.tar.gz" and execute "./configure" and "make" in the sdsl-0.7.3/ directory.
4. Change directory to the sparseMEM/ directory and execute 'make'.
5. Change directory to backwardMEM and execute 'make'.
The last 'make' will produce the executables backwardMEM1, backwardMEM2, backwardMEM4, backwardMEM8, and backwardMEM16.
Execute ./backwardMEM[1|2|4|8|16] -h to get information about the usage.
You can run the small example from the paper with "./backwardMEM1 l=2 example1.fasta example2.fasta"
Note: We use the algorithm and original implementation of Larsson and Sadakane (article: "Faster suffix sorting", 2007) for the construction of the index. The algorithm works only for sequences of length < 2GB. We will replace the algorithm in the next version of backwardMEM to handle input of more than 2GB.
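All of these archives follow the same unpack-then-build pattern. The sketch below mirrors it with a locally created stand-in archive — the directory layout and file names here are placeholders for illustration, not the real distribution:

```shell
# build a stand-in archive with the layout the instructions assume
mkdir -p src/calcMEM/backwardMEM
echo 'all:' > src/calcMEM/backwardMEM/Makefile
tar -czvf calcMEM.tar.gz -C src calcMEM

# the unpack step from the instructions: -x extract, -z gzip, -v verbose, -f file
tar -xzvf calcMEM.tar.gz

# inspect what was extracted before running ./configure and make
ls calcMEM/backwardMEM
```

The same pattern applies to the .tar.bz2 and .tar.xz archives above, with -j or -J in place of -z.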
backwardSK - A tool to calculate string kernels
Installation:
1. Download this tar.gz-file.
2. Unpack the tar.gz file with "tar -xzvf backwardSK.tgz".
3. Change directory to sdsl2/skernel and read the README file to get further instructions
Bidirectional search in a string
C++ source code for the bidirectional search in a string with wavelet trees:
bidirectionalsearch.tar.gz
The documentation is available here.
Linear Time Algorithms for Generalizations of the Longest Common Substring Problem
Here, one can find pseudo-code and implementations of the algorithms described in the article "Linear Time Algorithms for Generalizations of the Longest Common Substring Problem" by Michael Arnold and Enno Ohlebusch, Algorithmica, 60(4):806--818, 2011.
All Pairs Suffix-Prefix Problem
In this tar.gz-file, one can find the implementation of an efficient algorithm for the all pairs suffix-prefix problem based on a generalized enhanced suffix array, as described in the article "Efficient Algorithms for the All Pairs Suffix-Prefix Problem and the All Pairs Substring-Prefix Problem" by Enno Ohlebusch and Simon Gog, Information Processing Letters 110(3):123-128, 2010. It also contains an alternative implementation based on a generalized suffix tree and the EST database of C. elegans for test purposes.
# Message System
Description: a system usually has messages (Message) such as site mail and private messages between users, which are delivered to the right users through multiple channels (Sender) such as e-mail, WeChat template messages, and SMS. This message system solves that problem.
# Directory structure
.
├── Controller
├── CronScript    cron job scripts
├── Install
├── Libs          core implementation library
├── Messages      message entity classes
├── Model
├── Senders       delivery channel implementations
├── Service       services
└── Uninstall
# Usage guide
# 1. Create your message entity class
Create a message entity class in the Message/Messages/ directory, extend the Message\Libs\Message class, and implement createSender().
use Message\Libs\Message;
use Message\Model\MessageModel;
class SimpleMessage extends Message {
/**
* SimpleMessage constructor.
*
* @param string $sender   the sender
* @param string $receiver the receiver
* @param string $content  the message content
*/
public function __construct($sender, $receiver, $content = '') {
$this->setContent($content);
$this->setType(MessageModel::TYPE_MESSAGE); // Message type; this system does not prescribe many types, by default 'message' (private message) and 'remind' (reminder) are provided
$this->setSender($sender); // Sender; can be an ID or a name, decided by your business logic
$this->setSenderType('member'); // Sender type; may be empty, decided by your business logic
$this->setReceiver($receiver); // Receiver; can be an ID or a name, decided by your business logic
$this->setReceiverType('member'); // Receiver type; may be empty, decided by your business logic
$this->setTarget('1'); // Message source; e.g. if someone likes an article, Target could be the article ID, decided by your business logic
$this->setTargetType('11'); // Message source type; e.g. if someone likes an article, TargetType should be the article type name, decided by your business logic
}
/**
* Define this message's delivery channels
*
* @return array array of Senders
*/
function createSender() {
return [
new SimpleSender(), // example: send an e-mail
new SimpleWxSender() // example: send a WeChat template message
];
}
}
# 2. Create your delivery channel
Create a sender class in the Message/Senders/ directory, extend the Message\Libs\Sender class, and implement doSend().
Example: SimpleSender:
class SimpleSender extends Sender {
/**
* Send the message
*
* @param Message $message
* @return boolean
*/
function doSend(Message $message) {
echo 'simple send => ' . $message->getContent() . '<br>';
return true;
}
}
# 3. Create a message
Use Message\Service\MessageService::createMessage($msg) to add a message.
use Message\Service\MessageService;
class TestController extends AdminBase {
// send a message
function pushMessage() {
$sender = 'jayin';
$receiver = 'admin';
$content = 'User ' . $sender . ' says to user ' . $receiver . ': ' . 'Hello, this is a push at ' . date('Y-m-d H:i:s');
$msg = new SimpleMessage($sender, $receiver, $content);
MessageService::createMessage($msg);
}
}
# 4. Process messages
Use Message\Service\MessageService::handleMessage($msg_id) to process (send) a message.
# 4.1 Manually process a single message
use Message\Service\MessageService;
class TestController extends AdminBase {
// process messages
function handleMessage() {
// fetch unprocessed messages and process them
$messages = D('Message/Message')->where(['process_status' => MessageModel::PROCESS_STATUS_UNPROCESS])->field('id')->select();
foreach ($messages as $index => $message) {
MessageService::handleMessage($message['id']);
}
}
}
# 4.2 Run via a cron job [recommended]
Alternatively, you can add a cron job for Message/CronScript/HandleMessage; running it once every minute is recommended. Processing latency is on the order of minutes.
# 4.3 Run from the command line [recommended]
From the command line you can start multiple message-processing workers, which can cope with a massive volume of messages. Processing latency is on the order of seconds.
# start
php index.php /Message/Cli/start
# graceful stop (always stop this way; forcing a worker to quit before its task finishes can cause system errors, dirty data, etc.)
php index.php /Message/Cli/stop
# Best practices and tips
1. Create one Message class for each distinct type of message.
2. Create one Sender class for each delivery channel.
3. A Sender is really just an event handler (Handler); don't assume it can only be used to send messages (template messages, SMS, etc.).
4. In Message, setContent(), setReceiver(), and setTarget() are not required; passing in the corresponding parameters simply makes it easy for a Sender to build the outgoing content from the message's source information.
Vst Plugin Sylenth1 Vtx Crack
VST Plugin Sylenth1 VTX Crack: What You Need to Know
If you are a music producer or a hobbyist who likes to create electronic music, you might have heard of Sylenth1, a popular virtual analog synthesizer plugin that can produce high-quality sounds and effects. Sylenth1 is not a free plugin, however, and it requires a license to use it fully. This is where some people might look for a way to get Sylenth1 for free, such as using a cracked version of the plugin. One of the most searched terms related to Sylenth1 is "vst plugin sylenth1 vtx crack". But what is it exactly, and is it safe to use? In this article, we will explain what vst plugin sylenth1 vtx crack is, how it works, and what are the risks and consequences of using it.
What is VST Plugin Sylenth1 VTX Crack?
VST Plugin Sylenth1 VTX Crack is a modified version of the original Sylenth1 plugin, which has been cracked by an unknown hacker to bypass the license verification and allow unlimited use of the plugin without paying for it. The crack is supposed to make Sylenth1 compatible with any DAW (Digital Audio Workstation) that supports VST plugins, such as FL Studio, Ableton Live, Cubase, etc. The crack also claims to have some additional features and presets that are not available in the original version of Sylenth1.
How Does VST Plugin Sylenth1 VTX Crack Work?
The crack works by replacing the original Sylenth1.dll file in the VST folder with a modified one that has been hacked to remove the license check. The modified file is usually named as "Sylenth1 - VTX.dll" or something similar. Some users have reported that they had to rename the file to match the name of the plugin that their DAW was looking for. For example, if the DAW was expecting a file named "Sylenth1 - VTX Black.dll", then the user had to rename the cracked file accordingly. Once the file is replaced, the user can load Sylenth1 in their DAW and use it without any restrictions.
What are the Risks and Consequences of Using VST Plugin Sylenth1 VTX Crack?
While using a cracked version of Sylenth1 might seem tempting for some people who want to save money or try out the plugin before buying it, there are many risks and consequences that come with it. Here are some of them:
• Legal issues: Using a cracked version of Sylenth1 is illegal and violates the terms and conditions of the software license agreement. The developer of Sylenth1, LennarDigital, has the right to take legal action against anyone who uses or distributes the crack. If you are caught using or sharing the crack, you could face fines, lawsuits, or even criminal charges.
• Malware infection: The source and origin of the crack are unknown and unverified. There is no guarantee that the crack is safe and does not contain any malicious code that could harm your computer or steal your personal information. Some users have reported that they found viruses, trojans, adware, spyware, or ransomware in the crack files. Downloading and installing the crack could expose your system to serious security threats.
• Poor performance: The crack is not an official update or patch from LennarDigital. It is a hacked version that has not been tested or optimized for compatibility and stability. The crack could cause various problems with your DAW or your system, such as crashes, freezes, glitches, errors, or conflicts with other plugins or software. The crack could also affect the quality and functionality of Sylenth1, such as missing features, corrupted presets, distorted sounds, or reduced performance.
• Lack of support: The crack is not supported by LennarDigital or any other official source. If you encounter any issues or need any help with using Sylenth1, you will not be able to get any assistance or updates from the developer or the community. You will be on your own and have to rely on unreliable sources or forums for solutions.
• Unethical behavior: Using a cracked version of Sylenth1 is unfair and disrespectful to the developer and the original users of the plugin. LennarDigital has spent a lot of time and effort to create and maintain Sylenth1, and they deserve to be compensated for their work. By using the crack, you are stealing their intellectual property and depriving them of their rightful income. You are also hurting the music industry and the community by supporting piracy and discouraging innovation and creativity.
Conclusion
VST Plugin Sylenth1 VTX Crack is a hacked version of Sylenth1 that allows users to use the plugin without a license. However, using the crack is illegal, risky, and unethical. It could cause legal troubles, malware infection, poor performance, lack of support, and unethical behavior. Therefore, we strongly advise against using or downloading the crack. If you want to use Sylenth1, you should buy it from the official website or a trusted reseller. This way, you will get a legitimate and safe version of the plugin that works properly and has all the features and updates. You will also support the developer and the music industry, and enjoy creating amazing music with Sylenth1.
Adding custom fields
We have made the form fields used by SiteOrigin widgets extensible, so that you can easily create your own custom fields. There are a few steps involved, but each of them is fairly simple. You can see the example code in the so-dev-examples repository here.
Field class names
The SiteOrigin Widgets Bundle supports PHP 5.2.0 and above, so we avoid the use of the namespaces feature which is only available from PHP 5.3.0 onwards. We "namespace" our classes by prefixing their names with some (hopefully unique) prefix. The full class name then follows the convention $class_prefix . ucfirst($field_type). For example, the basic text field in the Widgets Bundle has the type text and it is prefixed by SiteOrigin_Widget_Field_, so the resulting class name is SiteOrigin_Widget_Field_Text. Note that the field type has a capitalised first letter.
Adding custom field class prefixes
We encourage you to prefix your custom field class names to avoid conflicts with other class names. If you do this, the Widgets Bundle needs to know what your chosen prefix is, in order to autoload and instantiate your custom field classes. If you need to, you can add more than one prefix, but one is sufficient for the Widgets Bundle.
Example - adding class prefixes
function my_custom_fields_class_prefixes( $class_prefixes ) {
$class_prefixes[] = 'My_Custom_Field_';
return $class_prefixes;
}
add_filter( 'siteorigin_widgets_field_class_prefixes', 'my_custom_fields_class_prefixes' );
Adding custom field class paths
It is necessary for the Widgets Bundle to know which directory you custom field class files are kept in for the purpose of autoloading. You can add your class paths to the autoloader by adding the siteorigin_widgets_field_class_paths filter.
Example - adding class paths
function my_custom_fields_class_paths( $class_paths ) {
$class_paths[] = plugin_dir_path( __FILE__ ) . 'custom-fields/';
return $class_paths;
}
add_filter( 'siteorigin_widgets_field_class_paths', 'my_custom_fields_class_paths' );
Implementing a custom field
Implementing a custom field is as simple as extending one of the existing field classes and implementing or overriding at least the render_field and sanitize_input methods. There is much more that can be done, but this is all that is required to successfully render a custom field and save it's input.
Filenames and class naming
For your field class to be loaded, you need to name your class according to the convention mentioned above. However, the file itself must be named according to the convention $field_type.class.php and it must be placed in one of the class paths you added in the step above. For example, if you have a field type of taxonomylist with a custom class path of my_custom_fields/ and a class prefix of My_Custom_Field_, you'd first create the file my_custom_fields/taxonomylist.class.php and then define the class My_Custom_Field_Taxonomylist inside it.
Inheriting from SiteOrigin_Widget_Field_Base
The SiteOrigin_Widget_Field_Base abstract class handles most of the work required for the widget form fields. It contains various properties and methods which are used to render the field for display in the front end and to prepare input from the field for database persistence. When extending this class there are two abstract methods which must be implemented, namely, render_field and sanitize_input.
The render_field method
render_field should output the HTML required for your custom field's display in the front end. The method receives two arguments, $value and $instance. $value is the current value of the field for a specific instance of a widget form and should always be escaped just before output. $instance is the widget form instance containing all its current values.
Example - render_field implementation
protected function render_field( $value, $instance ) {
?>
<input type="text" id="<?php echo $this->element_id ?>" name="<?php echo $this->element_name ?>"
value="<?php echo esc_attr( $value ); ?>"/>
<?php
}
The sanitize_input method
sanitize_input should ensure that the input received from the front end is in the desired format and any unwanted characters are removed. It receives one argument, $value, which is the raw current value of the field input in the front end. Typically this value is sanitized using the built-in WordPress sanitization and escaping functions.
Example - sanitize_input implementation
protected function sanitize_field_input( $value ) {
$sanitized_value = sanitize_text_field( $value );
return $sanitized_value;
}
Adding properties
You may wish to have additional configuration properties for your custom fields. Adding one is as simple as declaring the property in your custom class, then to use it, specify a configuration option with the same name as your property and the base field class will make sure it is set.
Example - adding properties
In your custom class, simply declare an instance variable.
class My_Custom_Field_Better_Text extends SiteOrigin_Widget_Field_Base {
/**
* My custom property for doing custom things.
*
* @access protected
* @var mixed
*/
protected $my_property;
}
Then when using the field, you may simply add a configuration option with the same name.
array(
'text' => array(
'type' => 'better-text',
'my_property' => 'This is my custom property value',
'label' => __('A better text field.', 'my-custom-field-test-widget-text-domain'),
'default' => 'Some better text.'
),
),
Rendering the label
It is fairly common for fields to have a label, so the SiteOrigin_Widget_Field_Base class includes a default label rendering function render_field_label. There are two ways to customise the label rendering. You can override render_field_label and do your own rendering, or you can override the get_label_classes function to return CSS classes to affect the styling of the existing label. The second method makes it easier for subclasses to customize the labels. You will need to ensure that your stylesheet containing the custom label CSS class is enqueued elsewhere.
Example - overriding render_field_label
protected function render_field_label() {
?>
<h1>My custom label rendering</h1>
<?php
}
Example - adding label CSS classes
protected function get_label_classes() {
$label_classes = parent::get_label_classes();
$label_classes[] = 'additional-CSS-class';
return $label_classes;
}
Rendering the description
Similarly to the field label, the SiteOrigin_Widget_Field_Base class includes a default description rendering function render_field_description. Its default rendering may be customized in the same way as labels.
Render before and after field
The SiteOrigin_Widget_Field_Base class has two additional rendering methods, render_before_field and render_after_field which are called before and after the main rendering method. These serve to avoid duplication of commonly rendered items, such as a label above the field and a description below the field. You should override these if you wish to prevent rendering of the label before a field, or the description after a field, or if you want to render additional items.
Example - overriding render_before_field and render_after_field methods
Say you want to render the description after the label, but before the field.
protected function render_before_field( $value, $instance ) {
// This is to keep the default label rendering behaviour.
parent::render_before_field( $value, $instance );
// Add custom rendering here.
$this->render_field_description();
}
protected function render_after_field( $value, $instance ) {
// Leave this blank so that the description is not rendered twice
}
The sanitize_instance method
There are cases where a field may affect values on the widget instance other than its own input. It then becomes necessary to perform additional sanitization on the widget instance. In such a case the sanitize_instance method may be overridden.
JavaScript variables
Occasionally it is necessary for a field to set a variable to be used in the front end. For such cases, override the get_javascript_variables function. This will be called by the containing widget while it is rendering its form and it will pass all field JavaScript variables to the front end, where they will be accessible as a global object called sow_field_javascript_variables.
Using a custom field
You can use your custom field in a widget, just like any other field.
$form_options = array(
'text' => array(
'type' => 'better-text',
'my_property' => 'This is my custom property value',
'label' => __( 'A better text field.', 'my-custom-field-test-widget-text-domain' ),
'description' => __( 'A description for my custom text field.' ),
'default' => 'Some better text.'
),
);
Similar Polygons
Ok homeschoolers try this problem on similar polygons.
Key concepts to remember:
* know the difference between congruent and similar.
* make sure you know how to set up and solve ratio and proportion problems using the cross-product.
* sometimes a problem will require the use of one solution to find the next solution.
* always look at your answer and see if it makes sense compared with the picture.
* similar polygon problems are common on the SAT/ACT.
similar polygons 1
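As a worked example of the ratio-and-proportion setup (the side lengths here are made up for illustration, not those in the pictured problem):

```latex
% Similar polygons: corresponding sides are proportional.
% Suppose ABCD \sim EFGH with AB = 6, EF = 9, and BC = 10; find FG.
\frac{AB}{EF} = \frac{BC}{FG}
\quad\Longrightarrow\quad
\frac{6}{9} = \frac{10}{FG}
\quad\Longrightarrow\quad
6 \cdot FG = 90
\quad\Longrightarrow\quad
FG = 15
```

The cross-product step (6 · FG = 9 · 10) is the same move used in every similar-polygon problem.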
Important habits to master math:
* write out all steps
* be very neat
* use pencil not pen
* review your work as you go
* make sure you know the basics like fractions, positive and negative numbers and order of operations- these are the most common places students make mistakes
Circles Find Angle Formed By Two Secants
Ok homeschoolers try this problem on circles involving an angle formed by two secants.
Key concepts to remember:
* know the language of circles, including secant, tangent, arcs, chords, diameter, radius and sectors.
* there are many circle angle formulas; do not confuse them
* many geometry circle problems will require you to solve an algebraic equation
* make sure you use the proper units of measure
circle-angles 1
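The formula behind this type of problem: the angle formed by two secants drawn from an external point is half the positive difference of the two intercepted arcs. A worked example with made-up arc measures:

```latex
% Two secants from external point P intercept a far arc of 100°
% and a near arc of 30°.
m\angle P = \tfrac{1}{2}\left(\text{far arc} - \text{near arc}\right)
          = \tfrac{1}{2}\left(100^\circ - 30^\circ\right)
          = 35^\circ
```

Compare this with the other circle angle formulas (inscribed angle, chord–chord angle) before you start; picking the wrong one is the most common mistake.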
Special Right Triangle 30-60-90
Ok homeschoolers try this problem on special right triangles with 30-60-90 degrees.
Key concepts to remember:
* master these problems as they are very common on the SAT/ACT
* you can always do these problems using the pythagorean theorem.
* you can use basic right triangle trigonometry to help you double check your work.
* make sure you know how to work with square root expressions- on some exams the answers will not be in decimals
* another special right triangle is the 45-45-90.
special right triangles 1
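The side ratios that make these problems fast, together with the Pythagorean check mentioned above:

```latex
% 30-60-90 sides are in the ratio 1 : \sqrt{3} : 2.
% If the short leg (opposite the 30° angle) is x:
\text{short leg} = x, \qquad
\text{long leg} = x\sqrt{3}, \qquad
\text{hypotenuse} = 2x
% Check with the Pythagorean theorem:
x^2 + \left(x\sqrt{3}\right)^2 = x^2 + 3x^2 = 4x^2 = (2x)^2
```

Note the answers naturally come out with square roots, which is why you should be comfortable leaving results in radical form.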
Trig Functions Sine
Ok homeschoolers try this problem on right triangle trigonometry and the sine function.
Key concepts to remember:
* first make sure you know all the properties of right triangles, including the Pythagorean theorem.
* you want to use a calculator for these problems
* know the phrase “SOH-CAH-TOA” it explains the ratios for sine, cosine and tangent in a right triangle
* you will need to set up a ratio equation (proportion) to solve for the missing variable
* you can always double check your solution using the Pythagorean theorem.
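The SOH part of "SOH-CAH-TOA" sets up the proportion sin(angle) = opposite / hypotenuse. Solving it for a missing side in Python (the angle and hypotenuse below are example values):

```python
import math

angle_deg = 30
hypotenuse = 12

# SOH: sin(angle) = opposite / hypotenuse, so opposite = hypotenuse * sin(angle)
opposite = hypotenuse * math.sin(math.radians(angle_deg))
print(round(opposite, 2))  # → 6.0
```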
trig-functions 1
Triangle Inequality
Ok homeschoolers try this problem on finding the longest and shortest sides of the triangle.
Key concepts to remember:
* this problem involves the “triangle inequality”
* the triangle inequality tells us which side is the longest and shortest in a triangle
* the triangle inequality also tells us if it’s possible to form a triangle given 3 lengths
* students should master the triangle inequality; you may see it on the SAT/ACT exam
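The "is it possible to form a triangle?" form of the triangle inequality is easy to check: each pair of sides must add up to more than the third side. A Python sketch with invented side lengths:

```python
def can_form_triangle(a, b, c):
    # Every pair of sides must sum to more than the remaining side
    return a + b > c and a + c > b and b + c > a

print(can_form_triangle(3, 4, 5))   # → True
print(can_form_triangle(1, 2, 10))  # → False
```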
triangle inequality 1
Classify The Triangle
Ok homeschoolers try this problem on classifying what type of triangle this is.
Key concepts to remember:
* there are many different types of triangles
* when classifying triangles make sure you understand geometric symbols for angles, sides, etc.
* once a triangle has been classified we can obtain more knowledge about its properties
* all triangles can be classified
polygons 1
How Many Days Until December 15th?
icon for a calendar with one day highlighted red
Time Remaining Until December 15, 2024:
239 days
• 7 months 25 days
• 34 weeks 1 day
• 5,736 hours
There are two hundred and thirty-nine days remaining until December 15, 2024. This is calculated from today's date, which is April 20, 2024.
The following chart shows the days remaining until December 15th from today and various other days.
On Date           Countdown to December 15th
April 16, 2024    243 days
April 17, 2024    242 days
April 18, 2024    241 days
April 19, 2024    240 days
April 20, 2024    239 days
April 21, 2024    238 days
April 22, 2024    237 days
April 23, 2024    236 days
April 24, 2024    235 days
How Many Work Days Are Left Until December 15th?
Weekdays Until December 15th
170 days
You can use our business days calculator to find how many working days are between any two dates.
It's important to note that this does not consider holidays that may fall on a weekday, such as Thanksgiving. So, you'll need to adjust this to account for holidays that you do not work.
How To Calculate the Days Until December 15th
You can count down the days until 12/15 in a few ways. The easiest is to use a calculator, such as our days until date calculator. You can also calculate the days manually or use a spreadsheet formula.
Method One: Calculate the Days Manually
If the current date falls in December, then you can simply subtract the current day of the month from 15. The resulting value is the number of days remaining.
days until 12/15 = 15 – current day in December
If the current date is not in December, then you can subtract the current day of the month from the number of days in the current month to find the remaining days in that month. Then, to that value, you can add the number of days in each month before December, and then add 15 for the number of days in December.
days until 12/15 = days left in the current month + days in next month + … + 15
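The manual method above translates directly into a short Python sketch (the helper name is ours; the standard library's date arithmetic handles the month lengths for you):

```python
from datetime import date

def days_until(month, day, today=None):
    today = today or date.today()
    target = date(today.year, month, day)
    if target < today:  # the date already passed this year, so count to next year
        target = date(today.year + 1, month, day)
    return (target - today).days

# Matches the article's example: April 20, 2024 to December 15, 2024
print(days_until(12, 15, today=date(2024, 4, 20)))  # → 239
```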
Method Two: How To Calculate the Days Using Google Sheets
You can also calculate the number of days remaining until 12/15 using spreadsheet software such as Google Sheets or Microsoft Excel. You can do this using a few different formulas.
From the Current Date
If you want to find the number of days remaining from the current day, you can use the following function in a cell to calculate that number:
=DATE(2024, 12, 15) - TODAY()
Note that this formula calculates and displays the number of days once you type the formula into a cell and hit Enter.
From Any Date
If you want to find the number of days remaining from a specific date listed in another cell, in this case, from a date in cell A1, then you can use the following function to display the number of days remaining from the date displayed in cell A1:
=DATE(2024, 12, 15) - A1
Countdown to More Dates
Ultimate Guide on How Single Page Applications Enhance User Experience!
web development
As web applications continue to grow in complexity and demand, developers are turning to Single Page Applications (SPAs) as a solution for improved user experience. SPAs are web applications that load a single HTML page and dynamically update the content as the user interacts with the application. This approach offers a range of benefits over traditional multi-page web applications, including faster loading times, enhanced responsiveness, improved performance, greater scalability, reduced maintenance costs, improved SEO, offline capabilities, and functionality as Progressive Web Apps. Below, we'll explore 15 reasons why Single Page Applications are the future of web development and how they can provide an optimal user experience.
User experience (UX) refers to the overall experience a user has when interacting with a product or service, such as a website, application, or physical device. It encompasses all aspects of the user’s interaction, including how easy the product is to use, how quickly it responds, how visually appealing it is, and how well it meets the user’s needs and expectations. A good user experience is intuitive, efficient, and enjoyable, making it easy for users to accomplish their goals and achieve a positive outcome from using the product or service. A poor user experience, on the other hand, can frustrate users, lead to abandoned tasks or lost business, and damage the reputation of the product or service.
One of the biggest advantages of SPAs is their ability to load quickly, providing users with a seamless and intuitive experience. Since SPAs only load the necessary components when requested, they can load faster than traditional web applications, which can take longer to load due to the need to load multiple pages.
• Enhanced Responsiveness
SPAs provide a more responsive experience for users, as they can update the content on the page in real time without requiring a full page reload. This means that users can interact with the application in real-time, without having to wait for the page to reload, resulting in a more engaging and interactive experience.
• Improved Performance
SPAs offer improved performance as compared to traditional web applications, which can suffer from slow load times and sluggish performance. This is because SPAs only load the necessary components when required, resulting in a faster and more responsive experience for users.
• Greater Scalability
SPAs are highly scalable, allowing developers to add new features and functionality to the application without affecting the performance or user experience. The reason behind this can be stated as SPAs operate within a single web page, making it easier to manage and update the application over time.
• Reduced Maintenance Costs
SPAs are easier to maintain than traditional web applications, as they can be broken down into smaller, independent components. This makes it easier to manage and update the application over time, reducing the amount of time and resources required for ongoing maintenance and updates.
• Improved SEO
SPAs offer improved search engine optimization (SEO) compared to traditional web applications. This is because SPAs can be designed to provide a better user experience, with fast load times and responsive design. This can result in higher search engine rankings and increased visibility for the application.
• Offline Capabilities
SPAs can provide offline capabilities, allowing users to access the application even when they are not connected to the internet. This is achieved through the use of service workers, which cache the application’s content and allow it to be accessed offline. This can provide a more seamless and uninterrupted experience for users, even when they are not connected to the internet.
• Functionality as Progressive Web Apps
SPAs can function as Progressive Web Apps (PWAs), providing users with a native app-like experience on their mobile devices. This is achieved through the use of web technologies, such as Service Workers and Web App Manifests, which allow SPAs to be installed on a user’s device and provide a more app-like experience.
• Microservices Architecture
SPAs can be designed using a microservices architecture, which offers a highly scalable approach to application development. This approach involves breaking down the application into smaller, independent components, which can be developed and deployed separately, making it easier to manage and update the application as a whole.
• Modularity
SPAs have a modular architecture, which makes it easier to manage and maintain the application over time. The modular structure allows developers to separate the application into smaller components, which can be updated or replaced independently without affecting the rest of the application. This approach can also help reduce development time, as developers can focus on building specific components of the application rather than the entire application at once.
• Reusability
SPAs have a modular architecture that allows for easy reusability of code components. This means that developers can write code that can be used across different parts of the application, rather than having to write new code for each page or feature. This makes the development process more efficient, reduces development time, and makes it easier to maintain and update the application over time. Additionally, reusability also promotes consistency across the application, ensuring that the user experience is seamless and consistent throughout the entire application.
• Interactivity
SPAs offer a high level of interactivity, providing users with a more engaging and interactive experience. This is achieved through the use of dynamic page elements, such as animations and transitions, which can provide users with instant feedback and enhance the overall user experience.
• Cross-Platform Compatibility
SPAs are compatible with a wide range of platforms, including desktops, laptops, tablets, and mobile devices. This is because SPAs are built using web technologies, which are platform-independent and can be accessed through any web browser. This allows developers to create applications that can be accessed by users on any device, providing a seamless and consistent experience across all platforms.
• Security
SPAs offer improved security compared to traditional web applications. This is because SPAs use a single-page architecture, which reduces the risk of security vulnerabilities that can arise when multiple pages are used. Additionally, SPAs can be designed to implement security features, such as encryption and authentication, to further enhance the security of the application.
• Cost-Effective
SPAs can be more cost-effective than traditional web applications. The reason is SPAs require less server-side processing, which can reduce the cost of hosting and maintaining the application. Additionally, the modular architecture of SPAs can reduce development time and make it easier to manage and update the application over time, further reducing costs.
Concluding Words
Single-page applications are the future of web development, offering a range of benefits over traditional web applications. SPAs provide improved performance, enhanced user experience, greater scalability, lower maintenance costs, improved SEO, better responsiveness, offline capabilities, and the ability to function as progressive web apps. Additionally, SPAs can be designed using a microservices architecture, making it easier to manage and update the application over time. The modular architecture of SPAs also makes them highly reusable and easy to maintain, reducing development time and costs. With the growing demand for web applications that provide a seamless and intuitive user experience, single-page applications are set to become the standard for web development in the years to come and are ready to play an increasingly important role in the future of web development.
Call Us Today: 212-300-3265
CALCULATOR GROUP
Cost Calculator Group is a container element and it includes a JS pseudo-code field which allows you to create calculation logic for your cost calculator instance. In the following example the calculation is: Total = (Item Type * Number of Items) + (Number of Items * Include Shipping * 1.5)
CONDITIONS
Cost Calculator conditional logic allows you to show or hide fields depending on the item value. You can use any jQuery transition (show, hide, fade…) and lock fields for editing. In this example the first select list turns on the switch & slider group, in which the switch controls the appearance of the slider.
Mathematics Stack Exchange is a question and answer site for people studying math at any level and professionals in related fields.
By the previous question, if there is an elementary embedding from $\mathfrak A$ into $\mathfrak B$, then $\mathfrak A \equiv \mathfrak B$.
Now it is natural to ask the converse: if $\mathfrak A \equiv \mathfrak B$, is there an elementary embedding linking them? Or are there $\mathfrak A, \mathfrak B$ such that $\mathfrak A \equiv \mathfrak B$ but neither can be embedded in the other?
For example, the real field $\mathbb R$ and the hyperreal field $\mathbb R^*$ are elementarily equivalent, and indeed $\mathbb R \prec \mathbb R^*$.
Not in general. However, if $\left| \mathfrak{A} \right| < \lambda$ and $\mathfrak{B}$ is $\lambda$-universal, then there does exist an elementary embedding $\mathfrak{A} \to \mathfrak{B}$. (This is essentially the definition of $\lambda$-universality.)
For example, let $\Sigma$ be a signature with two unary relation symbols $X$ and $Y$, and let $\mathfrak{A}$ and $\mathfrak{B}$ be the $\Sigma$-structures where $\left| X^\mathfrak{A} \right| = \aleph_0$, $\left| Y^\mathfrak{A} \right| = \aleph_3$, $X^\mathfrak{A} \cap Y^\mathfrak{A} = \emptyset$, $X^\mathfrak{A} \cup Y^\mathfrak{A} = \mathfrak{A}$; $\left| X^\mathfrak{B} \right| = \aleph_1$, $\left| Y^\mathfrak{B} \right| = \aleph_2$, $X^\mathfrak{B} \cap Y^\mathfrak{B} = \emptyset$, $X^\mathfrak{B} \cup Y^\mathfrak{B} = \mathfrak{B}$. Obviously $\mathfrak{A}$ and $\mathfrak{B}$ are elementarily equivalent, but for cardinality reasons there cannot exist any elementary embedding of one into the other.
I understand, they can be checked elementarily equivalent by EF-Game. Thank you very much. – Popopo Dec 8 '12 at 16:04
Is there any other (fairly basic) definition of $\lambda$-universality? – tomasz Dec 9 '12 at 14:41
Slice ending bounds past the end of the slicee
#1
It would be cool if:
let s = b"hello";
let slc = &s[..100];
worked, and returned a slice of length min(s.len(), 100) instead of panicking. This is the behavior I expected coming from Python. :smile: This is useful for ensuring input is no larger than a certain size.
#2
I’d find that behavior surprising. Also, couldn’t you just write &s[..min(s.len(), 100)], which would be much clearer that nothing magical is going on, and show what exactly you hope the code will do.
#3
Rust likes ergonomics, but not at the cost of magicking away some internal details. Rust tends to be more explicit in these situations.
closed #4
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
How to save simulation summary output data
This tip of the month highlights how you can save simulation summary output data. You can keep a record of the information displayed at simulation runtime, including parameters and multiple graphs. This can be then used later for review purposes.
To save and view simulation summary outputs:
1. On the home tab, click ‘General’.
2. The application settings will open in a new window.
3. On the project settings tab, check the box to ‘save simulation summary files’.
4. Click ‘Apply’, and then ‘OK’ in the pop-up that appears. Then close the application settings window.
5. Run a simulation in the normal manner and wait until this has completed. This can be a 1D river, 1D urban, 2D or integrated simulation.
6. Navigate to the folder containing the simulation and locate the Excel file with the same name as the simulation.
7. Open this file in Excel to review the outputs that were displayed at simulation runtime.
Once this setting is in place, all the simulation summaries will be saved automatically. You will need to go back into the settings menu to disable the feature.
In the case of a linked model, the output file will include separate sheets for the 1D river, 1D urban and 2D data.
Watch how it’s done!
Python Recapping Sub Programs (GCSE Computer Science Python)
This Python lesson is perfect to help GCSE classes get back into Python programming after they have had a break from it. It does assume they are familiar with the KS3 national curriculum requirements and have previously been taught about sub programs, functions and procedures. This lesson recaps these key areas, giving pupils a chance to practice previously learnt skills but this lesson is not suitable for teaching these elements to your class for the first time.
It includes an attractive dyslexia friendly PowerPoint presentation that includes differentiated lesson objectives, a pop quiz and lots of practical programming practice. It also includes a comprehensive teacher’s lesson plan including all the answers.
Duration: 1 lesson
This lesson recaps the following key skills:
• What is a sub program?
• Calling sub programs
• Passing single and multiple variables to sub programs
• Returning single and multiple variables to the main program
Learn To Teach Python Programming With Confidence
Nichola Lacey (author of the very popular book “Python by Example: Learning to Program in 150 Challenges”) is running some training courses at various locations around the UK, to help teachers learn how to teach Python programming confidently. Not only does it teach you how to program in Python but it also gives you a range of tools you can use to teach it effectively in your classroom and includes lots of practical advice and activities you can use straight away with your classes. Book your space today at www.nicholawilkin.com/python-training.
$4.57
• 02RecapLesson2.zip
• GCSEPythonSchemeOfWork.pdf
• Terms-of-Use.pdf
About this resource
Info
Created: Oct 6, 2019
Updated: Jan 3, 2020
zip, 5 MB
02RecapLesson2
pdf, 86 KB
GCSEPythonSchemeOfWork
pdf, 52 KB
Terms-of-Use
SQLite3 database insertion speed over SSHFS connection | Sololearn: Learn to code for FREE!
SQLite3 database insertion speed over SSHFS connection
I've created a small util to watch a text file on a local PC. When the file changes (an outside source updates the contents of the file), the util reads the file, extracts each line which has a SQL INSERT query on it, and sends it to a SQLite database. I tested this locally and it's fast; everything completed instantly. In reality I want to use a SQLite database that is on a VPS (remote server). I mounted a connection to the VPS storage using sshfs and the program still works, but the real-world insertion time is much slower. What took half a second now takes almost 1.5 minutes. I can see the SQLite journal file being written on the VPS when inserts happen. I've played a bit with PRAGMA settings but it still isn't very fast. I also tried wrapping my insert query exec in a transaction. Would sshfs most likely be the reason for the slowdown? I am sending large (2000+ row) batches of data in each INSERT query. Is that the most efficient way to do this?
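For readers hitting the same issue: one common pattern is to batch all pending rows into a single transaction, so the journal is synced once per batch instead of once per statement, which matters a lot on a high-latency sshfs mount. A minimal Python sketch (the table name and data are invented; point connect() at the real database file in practice):

```python
import sqlite3

rows = [(i, f"value-{i}") for i in range(2000)]

conn = sqlite3.connect(":memory:")  # use the real DB path over sshfs in practice
conn.execute("CREATE TABLE data (id INTEGER, val TEXT)")

# One transaction for the whole batch: commits (and syncs the journal) once
with conn:
    conn.executemany("INSERT INTO data VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM data").fetchone()[0]
print(count)  # → 2000
```

`PRAGMA journal_mode` and `PRAGMA synchronous` also change how often SQLite flushes to disk, but the per-commit sync is usually the dominant cost, so fewer, larger transactions tend to help most.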
19th Jan 2021, 7:52 PM
Nathan Stanley
0 Answers
6.6. Text
Figure 13.174. The Text tool in Toolbox
The Text tool in Toolbox
The Text tool places text into an image. When you click on an image with this tool the Text Editor dialog is opened where you can type your text, and a text layer is added in the Layer Dialog. In the Text Option dialog, you can change the font, color and size of your text, and justify it, interactively.
A new possibility appeared with GIMP-2.6: click-dragging the mouse pointer on the canvas draws a rectangular frame that you can enlarge and move as you do with rectangular selections. The text you type in the Text Editor is displayed in this frame and automatically adapted to the frame size. You can adjust this frame whenever you like.
Figure 13.175. Text tool bounding box
Text tool bounding box
When the mouse pointer is around the center of the frame, it comes with a small crosshair . Click-and-drag to move the frame and its contents (the text shows up when you release the mouse button). The text remains at the same place in the frame.
6.6.1. Activating the Tool
You can access this tool in several ways:
• In the image menu through ToolsText,
• by clicking the tool icon in Toolbox,
• or by using the T keyboard shortcut.
6.6.2. Options
Figure 13.176. Text tool options
Text tool options
Normally, tool options are displayed in a window attached under the Toolbox as soon as you activate a tool. If they are not, you can access them from the image menu bar through WindowsDockable WindowsTool Options which opens the option window of the selected tool.
Font
Click on the fonts button to open the font selector of this tool, which offers you a list of installed X fonts.
At the bottom of the font selector you find some icons which act as buttons for:
• resizing the font previews,
• selecting list view or grid view,
• opening the font dialog.
Choose a font from the installed fonts. When you select a font it is interactively applied to your text.
[Tip] Tip
You can use the scroll wheel of your pointing device (usually your mouse) on the fonts button in order to quickly change the font of your text (move the pointer on the fonts button, and don't click, just use the wheel button).
Size
This control sets the size of the font in any of several selectable units.
Hinting
Uses the indices of adjustment to modify the characters in order to produce clear letters in small font sizes.
Force Auto-Hinter
Auto Hinter tries to automatically compute information for better representation of the character font.
Antialiasing
Antialiasing will render the text with much smoother edges and curves. This is achieved by slight blurring and merging of the edges. This option can radically improve the visual appearance of the rendered typeface. Caution should be exercised when using antialiasing on images that are not in RGB color space.
Color
Color of the text that will be drawn next. Defaults to black. Selectable from the color picker dialog box that opens when the current color sample is clicked.
[Tip] Tip
You can also click-and-drag the color from the Toolbox color area onto the text.
Justify
Causes the text to be justified according to any of four rules selectable from the associated icons.
Indent
Controls the indent spacing from the left margin, for the first line.
Line Spacing
Controls the spacing between successive lines of text. This setting is interactive: it appears at the same time in image text. The number is not the space between lines itself, but how many pixels must be added to or subtracted from this space (the value can be negative).
Letter Spacing
Controls the spacing between letters. Also in this case the number is not the space itself between letters, but how many pixels must be added to or subtracted from this space (the value can be negative).
Text along Path
This option is enabled only if a path exists. When your text is created, then create or import a path and make it active. If you create your path before the text, the path becomes invisible and you have to make it visible in the Path Dialog.
This command is also available from the Layer menu:
Figure 13.177. The Text to Path command among text commands in the Layer menu
The Text to Path command among text commands in the Layer menu
This group of options appears only if a layer text exists.
If you want to use a text which already exists, make it active in the Layer dialog, select the Text tool and click on the text in the image window.
Click on the Text along Path button. The text is bent along the path. Letters are represented with their outline. Each of them is a component of the new path which appears in the Path dialog. All path options should apply to this new path.
Figure 13.178. Text along Path example
“Text along Path” example
“Text along Path” example
Path from Text
This tool creates a selection path from the selected text. Every letter is surrounded with a path component. So you can modify the shape of letters by moving path control points.
6.6.3. Text Editor
Figure 13.179. The Text Editor
The Text Editor
This dialog window is opened when you click on the image with the Text Tool. There, you can enter the text which shows up in real time in the frame on top of the canvas.
You can correct the text you are writing and you can change the text font with the Font Editor.
As soon as you start writing, a Text layer is created in the Layer Dialog. On an image with such a layer (the image you are working on, or a .xcf image), you can resume text editing by activating this text layer then clicking on it (double click). Of course, you can apply to this text layer the same functions you use with other layers.
To add another text to your image click on a non-text layer: a new Text Editor will appear and a new text layer will be created. To pass from a text to another one activate the corresponding text layer and click on it to activate the editor.
You can get Unicode characters with Ctrl+Shift+U plus hexadecimal Unicode code of the desired char, for example:
Figure 13.180. Entering Unicode characters
Entering Unicode characters
Ctrl+Shift+U
Entering Unicode characters
4 7
Entering Unicode characters
Enter
Of course this feature is more useful for entering special (even exotic) characters, provided that the required glyphs for these characters are supplied by the selected font — only few fonts support Klingon. ;-)
Unicode 0x47 (G), 0x2665, 0x0271, 0x03C0
The Text Editor options
Load text from file
Text can be loaded from a text file by clicking the folder icon in the text editor. All the text in the file is loaded.
Clear all text
Clicking this icon clears the editor and the associated text on the image.
From left to right
This option causes text to be entered from left to right, as is the case with most Western languages and many Eastern languages.
From right to left
This option allows text to be entered from right to left, as is the case with some Eastern languages, such as Arabic (illustrated in the icon).
Use selected font
Default doesn't use the font you have selected in the Options dialog. If you want to use it, check this option.
[Note] Note
See also Section 5, “Text”.
C# formatting question break; after a block
Answered
I could use a little help as I am asure I am missing something. I'd like to get Resharper to format my switch statements a little differently then it is, but can't find a way to adjust the behevior.
What I'd like to see is my break statements outside of my case blocks to be directly underneath the closing brace of the case block.
switch(foo)
{
case BAR.one:
{
}
break;
case BAR.two:
{
}
break;
.
.
.
default::
{
}
break;
}
What I get is the break ends up indented an extra tab:
switch (foo)
{
    case BAR.one:
    {
    }
        break;
    case BAR.two:
    {
    }
        break;
    .
    .
    .
    default:
    {
    }
        break;
}
This isn't a problem when writing the code, as it is quick to fix, but it is a pain when auto-formatting kicks in.
Any help would be appreciated.
I have this same problem in C++ (with Resharper++).
This seems to have been a problem since 2014...?
No one has any ideas how to fix this?
I would like to be able to configure the same behavior Douglas specified in his original post. In my case, this is also specific to using switch blocks as noted by Douglas. I am using RS 2018.1.2 and cannot figure out how to stop RS from adding the blank line before the break statement.
Is there an update on this?
It's not a huge pain, but it is a pain when you quickly write 10 case/break in a switch statement, and hitting backspace on the break moves it one row up, and when you hit enter again it moves it down. You have to go to the beginning of the row and hit delete to move it manually.
Hello,
there's a known issue reported here - https://youtrack.jetbrains.com/issue/RSRP-478502.
You are welcome to comment or vote for it.
Thank you.
When I tried looking for it, I couldn't find it, that's why I responded here.
The question remains however - any update?
If you look at the list you can see my name as one person that DID vote for it, I even wrote a long example of why we want this.
EDIT: Two now, added some more information.
Hello Stefan Smietanowski
The issue is definitely in our plans and we are looking into it though I cannot specify certain estimates.
Please accept my apologies for the inconvenience.
# # $Id: cf.data.pre,v 1.382.2.35 2010/02/12 20:55:04 hno Exp $ # # SQUID Web Proxy Cache http://www.squid-cache.org/ # ---------------------------------------------------------- # # Squid is the result of efforts by numerous individuals from # the Internet community; see the CONTRIBUTORS file for full # details. Many organizations have provided support for Squid's # development; see the SPONSORS file for full details. Squid is # Copyrighted (C) 2000 by the Regents of the University of # California; see the COPYRIGHT file for full details. Squid # incorporates software developed and/or copyrighted by other # sources; see the CREDITS file for full details. # # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111, USA. # COMMENT_START WELCOME TO SQUID @VERSION@ ---------------------------- This is the default Squid configuration file. You may wish to look at the Squid home page (http://www.squid-cache.org/) for the FAQ and other documentation. The default Squid config file shows what the defaults for various options happen to be. If you don't need to change the default, you shouldn't uncomment the line. Doing so may cause run-time problems. In some cases "none" refers to no default setting at all, while in other cases it refers to a valid option - the comments for that keyword indicate if this is the case. 
COMMENT_END

COMMENT_START
 OPTIONS FOR AUTHENTICATION
 -----------------------------------------------------------------------------
COMMENT_END

NAME: auth_param
TYPE: authparam
LOC: Config.authConfig
DEFAULT: none
DOC_START
    This is used to define parameters for the various authentication
    schemes supported by Squid.

    format: auth_param scheme parameter [setting]

    The order in which authentication schemes are presented to the
    client is dependent on the order the scheme first appears in the
    config file. IE has a bug (it's not RFC 2617 compliant) in that
    it will use the basic scheme if basic is the first entry
    presented, even if more secure schemes are presented. For now
    use the order in the recommended settings section below. If
    other browsers have difficulties (don't recognize the schemes
    offered even if you are using basic) either put basic first, or
    disable the other schemes (by commenting out their program
    entry).

    Once an authentication scheme is fully configured, it can only
    be shut down by shutting squid down and restarting. Changes can
    be made on the fly and activated with a reconfigure. I.e. you
    can change to a different helper, but not unconfigure the helper
    completely.

    Please note that while this directive defines how Squid
    processes authentication, it does not automatically activate
    authentication. To use authentication you must in addition make
    use of ACLs based on login name in http_access (proxy_auth,
    proxy_auth_regex or external with %LOGIN used in the format
    tag). The browser will be challenged for authentication on the
    first such acl encountered in http_access processing and will
    also be re-challenged for new login credentials if the request
    is being denied by a proxy_auth type acl.

    WARNING: authentication can't be used in a transparently
    intercepting proxy as the client then thinks it is talking to an
    origin server and not the proxy. This is a limitation of bending
    the TCP/IP protocol to transparently intercepting port 80, not a
    limitation in Squid.
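    As an illustration of the basic-scheme helper protocol described
    below (one "username password" request per line, answered with
    "OK" or "ERR"), here is a minimal sketch in Python. The in-memory
    credential table is a hypothetical stand-in for a real backend
    such as an NCSA passwd file; this is not a shipped Squid helper.

```python
#!/usr/bin/env python
# Sketch of a Squid basic-auth helper: reads "username password"
# lines from stdin forever and answers OK or ERR on stdout.
import sys

# Hypothetical example data; a real helper would consult a
# passwd file, LDAP, a database, etc.
CREDENTIALS = {"alice": "secret"}

def check(line, db):
    # Squid sends one "username password" pair per request line.
    parts = line.strip().split(None, 1)
    if len(parts) != 2:
        return "ERR malformed line"
    user, password = parts
    if db.get(user) == password:
        return "OK"
    # The text after ERR is available as %m in the error page.
    return "ERR bad credentials"

def main():
    # Endless loop: one reply per request line, flushed
    # immediately so Squid is never left waiting.
    for line in sys.stdin:
        sys.stdout.write(check(line, CREDENTIALS) + "\n")
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```

    A helper like this would be wired in with an auth_param basic
    program line pointing at the script.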
    === Parameters for the basic scheme follow. ===

    "program" cmdline
    Specify the command for the external authenticator. Such a
    program reads a line containing "username password" and replies
    "OK" or "ERR" in an endless loop. "ERR" responses may optionally
    be followed by an error description available as %m in the
    returned error page.

    By default, the basic authentication scheme is not used unless a
    program is specified.

    If you want to use the traditional proxy authentication, jump
    over to the helpers/basic_auth/NCSA directory and type:
        % make
        % make install

    Then, set this line to something like

    auth_param basic program @DEFAULT_PREFIX@/libexec/ncsa_auth @DEFAULT_PREFIX@/etc/passwd

    "children" numberofchildren
    The number of authenticator processes to spawn. If you start too
    few squid will have to wait for them to process a backlog of
    credential verifications, slowing it down. When credential
    verifications are done via a (slow) network you are likely to
    need lots of authenticator processes.
    auth_param basic children 5

    "concurrency" numberofconcurrentrequests
    The number of concurrent requests/channels the helper supports.
    Changes the protocol used to include a channel number first on
    the request/response line, allowing multiple requests to be sent
    to the same helper in parallel without waiting for the response.
    Must not be set unless it's known the helper supports this.

    "realm" realmstring
    Specifies the realm name which is to be reported to the client
    for the basic proxy authentication scheme (part of the text the
    user will see when prompted for their username and password).
    auth_param basic realm Squid proxy-caching web server

    "credentialsttl" timetolive
    Specifies how long squid assumes an externally validated
    username:password pair is valid for - in other words how often
    the helper program is called for that user. Set this low to
    force revalidation with short lived passwords.
    Note that setting this high does not impact your susceptibility
    to replay attacks unless you are using a one-time password
    system (such as SecureID). If you are using such a system, you
    will be vulnerable to replay attacks unless you also use the
    max_user_ip ACL in an http_access rule.
    auth_param basic credentialsttl 2 hours

    "casesensitive" on|off
    Specifies if usernames are case sensitive. Most user databases
    are case insensitive allowing the same username to be spelled
    using both lower and upper case letters, but some are case
    sensitive. This makes a big difference for user_max_ip ACL
    processing and similar.
    auth_param basic casesensitive off

    "blankpassword" on|off
    Specifies if blank passwords should be supported. Defaults to
    off as there are multiple authentication backends which handle
    blank passwords as "guest" access.

    === Parameters for the digest scheme follow ===

    "program" cmdline
    Specify the command for the external authenticator. Such a
    program reads a line containing "username":"realm" and replies
    with the appropriate H(A1) value hex encoded, or ERR if the user
    (or his H(A1) hash) does not exist. See RFC 2616 for the
    definition of H(A1). "ERR" responses may optionally be followed
    by an error description available as %m in the returned error
    page.

    By default, the digest authentication scheme is not used unless
    a program is specified.

    If you want to use a digest authenticator, jump over to the
    helpers/digest_auth/ directory and choose the authenticator to
    use. In its directory, type
        % make
        % make install

    Then, set this line to something like

    auth_param digest program @DEFAULT_PREFIX@/libexec/digest_auth_pw @DEFAULT_PREFIX@/etc/digpass

    "children" numberofchildren
    The number of authenticator processes to spawn. If you start too
    few squid will have to wait for them to process a backlog of
    credential verifications, slowing it down. When credential
    verifications are done via a (slow) network you are likely to
    need lots of authenticator processes.
    auth_param digest children 5

    "concurrency" numberofconcurrentrequests
    The number of concurrent requests/channels the helper supports.
    Changes the protocol used to include a channel number first on
    the request/response line, allowing multiple requests to be sent
    to the same helper in parallel without waiting for the response.
    Must not be set unless it's known the helper supports this.

    "realm" realmstring
    Specifies the realm name which is to be reported to the client
    for the digest proxy authentication scheme (part of the text the
    user will see when prompted for their username and password).
    auth_param digest realm Squid proxy-caching web server

    "nonce_garbage_interval" timeinterval
    Specifies the interval that nonces that have been issued to
    clients are checked for validity.
    auth_param digest nonce_garbage_interval 5 minutes

    "nonce_max_duration" timeinterval
    Specifies the maximum length of time a given nonce will be
    valid for.
    auth_param digest nonce_max_duration 30 minutes

    "nonce_max_count" number
    Specifies the maximum number of times a given nonce can be used.
    auth_param digest nonce_max_count 50

    "nonce_strictness" on|off
    Determines if squid requires strict increment-by-1 behavior for
    nonce counts, or just incrementing (off - for use when user
    agents generate nonce counts that occasionally miss 1, e.g.
    1,2,4,6).
    auth_param digest nonce_strictness off

    "check_nonce_count" on|off
    This directive, if set to off, can disable the nonce count check
    completely to work around buggy digest qop implementations in
    certain mainstream browser versions. Default on to check the
    nonce count to protect from authentication replay attacks.
    auth_param digest check_nonce_count on

    "post_workaround" on|off
    This is a workaround for certain buggy browsers that send an
    incorrect request digest in POST requests when reusing the same
    nonce as acquired earlier in response to a GET request.
    auth_param digest post_workaround off

    === NTLM scheme options follow ===

    "program" cmdline
    Specify the command for the external NTLM authenticator. Such a
    program participates in the NTLMSSP exchanges between Squid and
    the client and reads commands according to the Squid NTLMSSP
    helper protocol. See helpers/ntlm_auth/ for details. The
    recommended NTLM authenticator is ntlm_auth from Samba-3.X, but
    a number of other NTLM authenticators are available.

    By default, the ntlm authentication scheme is not used unless a
    program is specified.

    auth_param ntlm program /path/to/samba/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

    "children" numberofchildren
    The number of authenticator processes to spawn. If you start too
    few squid will have to wait for them to process a backlog of
    credential verifications, slowing it down. When credential
    verifications are done via a (slow) network you are likely to
    need lots of authenticator processes.
    auth_param ntlm children 5

    "keep_alive" on|off
    This option enables the use of keep-alive on the initial
    authentication request. It has been reported that some versions
    of MSIE have problems if this is enabled, but performance will
    be increased if enabled.
    auth_param ntlm keep_alive on

    === Negotiate scheme options follow ===

    "program" cmdline
    Specify the command for the external Negotiate authenticator.
    Such a program participates in the SPNEGO exchanges between
    Squid and the client and reads commands according to the Squid
    ntlmssp helper protocol. See helpers/ntlm_auth/ for details. The
    recommended SPNEGO authenticator is ntlm_auth from Samba-4.X.

    By default, the Negotiate authentication scheme is not used
    unless a program is specified.

    auth_param negotiate program /path/to/samba/bin/ntlm_auth --helper-protocol=gss-spnego

    "children" numberofchildren
    The number of authenticator processes to spawn. If you start too
    few squid will have to wait for them to process a backlog of
    credential verifications, slowing it down.
    When credential verifications are done via a (slow) network you
    are likely to need lots of authenticator processes.
    auth_param negotiate children 5

    "keep_alive" on|off
    If you experience problems with PUT/POST requests when using the
    Negotiate authentication scheme then you can try setting this to
    off. This will cause Squid to forcibly close the connection on
    the initial requests where the browser asks which schemes are
    supported by the proxy.
    auth_param negotiate keep_alive on

NOCOMMENT_START
#Recommended minimum configuration per scheme:
#auth_param negotiate program
#auth_param negotiate children 5
#auth_param negotiate keep_alive on
#auth_param ntlm program
#auth_param ntlm children 5
#auth_param ntlm keep_alive on
#auth_param digest program
#auth_param digest children 5
#auth_param digest realm Squid proxy-caching web server
#auth_param digest nonce_garbage_interval 5 minutes
#auth_param digest nonce_max_duration 30 minutes
#auth_param digest nonce_max_count 50
#auth_param basic program
#auth_param basic children 5
#auth_param basic realm Squid proxy-caching web server
#auth_param basic credentialsttl 2 hours
#auth_param basic casesensitive off
NOCOMMENT_END
DOC_END

NAME: authenticate_cache_garbage_interval
TYPE: time_t
DEFAULT: 1 hour
LOC: Config.authenticateGCInterval
DOC_START
    The time period between garbage collection across the username
    cache. This is a tradeoff between memory utilization (long
    intervals - say 2 days) and CPU (short intervals - say 1
    minute). Only change if you have good reason to.
DOC_END

NAME: authenticate_ttl
TYPE: time_t
DEFAULT: 1 hour
LOC: Config.authenticateTTL
DOC_START
    The time a user & their credentials stay in the logged in user
    cache since their last request. When the garbage interval
    passes, all user credentials that have passed their TTL are
    removed from memory.
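    The interaction between authenticate_ttl and
    authenticate_cache_garbage_interval can be sketched as a TTL'd
    cache swept by a periodic garbage-collection pass. This is a
    simplified model for illustration only, not Squid's actual data
    structures; all names below are hypothetical.

```python
class UserCache:
    """Toy model of the logged-in user cache: an entry lives for
    `ttl` seconds after the user's last request and is removed the
    next time the GC pass runs (mirroring authenticate_ttl and
    authenticate_cache_garbage_interval)."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.last_seen = {}

    def touch(self, user, now):
        # Every request resets the TTL for that user.
        self.last_seen[user] = now

    def gc(self, now):
        # Called once per garbage interval: sweep out every entry
        # whose TTL has passed, and report what was removed.
        expired = [u for u, t in self.last_seen.items()
                   if now - t > self.ttl]
        for u in expired:
            del self.last_seen[u]
        return expired
```

    Note that an entry past its TTL survives until the next GC pass,
    which is why the garbage interval trades memory against CPU.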
DOC_END

NAME: authenticate_ip_ttl
TYPE: time_t
LOC: Config.authenticateIpTTL
DEFAULT: 0 seconds
DOC_START
    If you use proxy authentication and the 'max_user_ip' ACL, this
    directive controls how long Squid remembers the IP addresses
    associated with each user. Use a small value (e.g., 60 seconds)
    if your users might change addresses quickly, as is the case
    with dialups. You might be safe using a larger value (e.g., 2
    hours) in a corporate LAN environment with relatively static
    address assignments.
DOC_END

COMMENT_START
 ACCESS CONTROLS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: external_acl_type
TYPE: externalAclHelper
LOC: Config.externalAclHelperList
DEFAULT: none
DOC_START
    This option defines external acl classes using a helper program
    to look up the status

      external_acl_type name [options] FORMAT.. /path/to/helper [helper arguments..]

    Options:

      ttl=n           TTL in seconds for cached results (defaults to
                      3600 for 1 hour)
      negative_ttl=n  TTL for cached negative lookups (default same
                      as ttl)
      children=n      number of processes spawned to service
                      external acl lookups of this type.
                      (default 5).
      concurrency=n   concurrency level per process. Only used with
                      helpers capable of processing more than one
                      query at a time. Note: see compatibility note
                      below
      cache=n         result cache size, 0 is unbounded (default)
      grace=          Percentage remaining of TTL where a refresh of
                      a cached entry should be initiated without
                      needing to wait for a new reply.
                      (default 0 for no grace period)
      protocol=2.5    Compatibility mode for Squid-2.5 external acl
                      helpers

    FORMAT specifications

      %LOGIN          Authenticated user login name
      %EXT_USER       Username from external acl
      %IDENT          Ident user name
      %SRC            Client IP
      %SRCPORT        Client source port
      %DST            Requested host
      %PROTO          Requested protocol
      %PORT           Requested port
      %METHOD         Request method
      %MYADDR         Squid interface address
      %MYPORT         Squid http_port number
      %PATH           Requested URL-path (including query-string if
                      any)
      %USER_CERT      SSL User certificate in PEM format
      %USER_CERTCHAIN SSL User certificate chain in PEM format
      %USER_CERT_xx   SSL User certificate subject attribute xx
      %USER_CA_xx     SSL User certificate issuer attribute xx
      %{Header}       HTTP request header "Header"
      %{Hdr:member}   HTTP request header "Hdr" list member "member"
      %{Hdr:;member}  HTTP request header list member using ; as
                      list separator. ; can be any non-alphanumeric
                      character.
      %ACL            The ACL name
      %DATA           The ACL arguments. If not used then any
                      arguments are automatically added at the end

    In addition to the above, any string specified in the
    referencing acl will also be included in the helper request
    line, after the specified formats (see the "acl external"
    directive)

    The helper receives lines per the above format specification,
    and returns lines starting with OK or ERR indicating the
    validity of the request and optionally followed by additional
    keywords with more details.

    General result syntax:

      OK/ERR keyword=value ...

    Defined keywords:

      user=      The user's name (login also understood)
      password=  The user's password (for PROXYPASS login=
                 cache_peer)
      message=   Error message or similar used as %o in error
                 messages (error also understood)
      log=       String to be logged in access.log. Available as
                 %ea in logformat specifications

    If protocol=3.0 (the default) then URL escaping is used to
    protect each value in both requests and responses.

    If using protocol=2.5 then all values need to be enclosed in
    quotes if they may contain whitespace, or the whitespace escaped
    using \.
    And quotes or \ characters within the keyword value must be \
    escaped.

    When using the concurrency= option the protocol is changed by
    introducing a query channel tag in front of the
    request/response. The query channel tag is a number between 0
    and concurrency-1.

    Compatibility Note: The children= option was named concurrency=
    in Squid-2.5.STABLE3 and earlier, and was accepted as an alias
    for the duration of the Squid-2.5 releases to keep
    compatibility. However, the meaning of the concurrency= option
    has changed in Squid-2.6 to match that of Squid-3 and the old
    syntax no longer works.
DOC_END

NAME: acl
TYPE: acl
LOC: Config.aclList
DEFAULT: none
DOC_START
    Defining an Access List

    acl aclname acltype string1 ...
    acl aclname acltype "file" ...

    when using "file", the file should contain one item per line

    acltype is one of the types described below

    By default, regular expressions are CASE-SENSITIVE. To make
    them case-insensitive, use the -i option.

    acl aclname src ip-address/netmask ...  (clients IP address)
    acl aclname src addr1-addr2/netmask ... (range of addresses)
    acl aclname dst ip-address/netmask ...  (URL host's IP address)
    acl aclname myip ip-address/netmask ... (local socket IP
                                             address)

    acl aclname arp mac-address ... (xx:xx:xx:xx:xx:xx notation)
      # The arp ACL requires the special configure option
      # --enable-arp-acl. Furthermore, the arp ACL code is not
      # portable to all operating systems. It works on Linux,
      # Solaris, FreeBSD and some other *BSD variants.
      #
      # NOTE: Squid can only determine the MAC address for clients
      # that are on the same subnet. If the client is on a
      # different subnet, then Squid cannot find out its MAC
      # address.

    acl aclname srcdomain .foo.com ...
      # reverse lookup, client IP
    acl aclname dstdomain .foo.com ...
      # Destination server from URL
    acl aclname srcdom_regex [-i] xxx ...
      # regex matching client name
    acl aclname dstdom_regex [-i] xxx ...
      # regex matching server
      # For dstdomain and dstdom_regex a reverse lookup is tried if
      # an IP-based URL is used and no match is found. The name
      # "none" is used if the reverse lookup fails.

    acl aclname time [day-abbrevs] [h1:m1-h2:m2]
        day-abbrevs:
            S - Sunday
            M - Monday
            T - Tuesday
            W - Wednesday
            H - Thursday
            F - Friday
            A - Saturday
        h1:m1 must be less than h2:m2

    acl aclname url_regex [-i] ^http:// ...
      # regex matching on whole URL
    acl aclname urlpath_regex [-i] \.gif$ ...
      # regex matching on URL path
    acl aclname urllogin [-i] [^a-zA-Z0-9] ...
      # regex matching on URL login field
    acl aclname port 80 70 21 ...
    acl aclname port 0-1024 ...   # ranges allowed
    acl aclname myport 3128 ...   # (local socket TCP port)
    acl aclname proto HTTP FTP ...
    acl aclname method GET POST ...
    acl aclname browser [-i] regexp ...
      # pattern match on User-Agent header (see also req_header
      # below)
    acl aclname referer_regex [-i] regexp ...
      # pattern match on Referer header
      # Referer is highly unreliable, so use with care
    acl aclname ident username ...
    acl aclname ident_regex [-i] pattern ...
      # string match on ident output.
      # use REQUIRED to accept any non-null ident.
    acl aclname src_as number ...
    acl aclname dst_as number ...
      # Except for access control, AS numbers can be used for
      # routing of requests to specific caches. Here's an
      # example for routing all requests for AS#1241 and only
      # those to mycache.mydomain.net:
      # acl asexample dst_as 1241
      # cache_peer_access mycache.mydomain.net allow asexample
      # cache_peer_access mycache_mydomain.net deny all
    acl aclname proxy_auth [-i] username ...
    acl aclname proxy_auth_regex [-i] pattern ...
      # list of valid usernames
      # use REQUIRED to accept any valid username.
      #
      # NOTE: when a Proxy-Authentication header is sent but it is
      # not needed during ACL checking the username is NOT logged
      # in access.log.
      #
      # NOTE: proxy_auth requires an EXTERNAL authentication
      # program to check username/password combinations (see
      # auth_param directive).
      #
      # NOTE: proxy_auth can't be used in a transparent proxy as
      # the browser needs to be configured for using a proxy in
      # order to respond to proxy authentication.

    acl aclname snmp_community string ...
      # A community string to limit access to your SNMP Agent
      # Example:
      #
      # acl snmppublic snmp_community public

    acl aclname maxconn number
      # This will be matched when the client's IP address has
      # more than <number> HTTP connections established.

    acl aclname max_user_ip [-s] number
      # This will be matched when the user attempts to log in from
      # more than <number> different ip addresses. The
      # authenticate_ip_ttl parameter controls the timeout on the
      # ip entries.
      # If -s is specified the limit is strict, denying browsing
      # from any further IP addresses until the ttl has expired.
      # Without -s Squid will just annoy the user by "randomly"
      # denying requests. (the counter is reset each time the limit
      # is reached and a request is denied)
      # NOTE: in acceleration mode or where there is a mesh of
      # child proxies, clients may appear to come from multiple
      # addresses if they are going through proxy farms, so a limit
      # of 1 may cause user problems.

    acl aclname req_mime_type mime-type1 ...
      # regex match against the mime type of the request generated
      # by the client. Can be used to detect file upload or some
      # types of HTTP tunneling requests.
      # NOTE: This does NOT match the reply. You cannot use this
      # to match the returned file type.

    acl aclname req_header header-name [-i] any\.regex\.here
      # regex match against any of the known request headers. May
      # be thought of as a superset of "browser", "referer" and
      # "mime-type" ACLs.

    acl aclname rep_mime_type mime-type1 ...
      # regex match against the mime type of the reply received by
      # squid. Can be used to detect file download or some
      # types of HTTP tunneling requests.
      # NOTE: This has no effect in http_access rules. It only has
      # effect in rules that affect the reply data stream such as
      # http_reply_access.
    acl aclname rep_header header-name [-i] any\.regex\.here
      # regex match against any of the known reply headers. May be
      # thought of as a superset of "browser", "referer" and
      # "mime-type" ACLs.
      #
      # Example:
      #
      # acl many_spaces rep_header Content-Disposition -i [[:space:]]{3,}

    acl acl_name external class_name [arguments...]
      # external ACL lookup via a helper class defined by the
      # external_acl_type directive.

    acl urlgroup group1 ...
      # match against the urlgroup as indicated by redirectors

    acl aclname user_cert attribute values...
      # match against attributes in a user SSL certificate
      # attribute is one of DN/C/O/CN/L/ST

    acl aclname ca_cert attribute values...
      # match against attributes of a user's issuing CA SSL
      # certificate
      # attribute is one of DN/C/O/CN/L/ST

    acl aclname ext_user username ...
    acl aclname ext_user_regex [-i] pattern ...
      # string match on username returned by external acl helper
      # use REQUIRED to accept any non-null user name.

    Examples:
      acl macaddress arp 09:00:2b:23:45:67
      acl myexample dst_as 1241
      acl password proxy_auth REQUIRED
      acl fileupload req_mime_type -i ^multipart/form-data$
      acl javascript rep_mime_type -i ^application/x-javascript$

NOCOMMENT_START
#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
acl CONNECT method CONNECT
NOCOMMENT_END
DOC_END

NAME: http_access
TYPE: acl_access
LOC: Config.accessList.http
DEFAULT: none
DEFAULT_IF_NONE: deny all
DOC_START
    Allowing or Denying access based on defined access lists

    Access to the HTTP port:
    http_access allow|deny
    [!]aclname ...

    NOTE on default values:

    If there are no "access" lines present, the default is to deny
    the request.

    If none of the "access" lines cause a match, the default is the
    opposite of the last line in the list. If the last line was
    deny, the default is allow. Conversely, if the last line is
    allow, the default will be deny. For these reasons, it is a
    good idea to have a "deny all" or "allow all" entry at the end
    of your access lists to avoid potential confusion.

NOCOMMENT_START
#Recommended minimum configuration:
#
# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager
# Deny requests to unknown ports
http_access deny !Safe_ports
# Deny CONNECT to other than SSL ports
http_access deny CONNECT !SSL_ports
#
# We strongly recommend the following be uncommented to protect
# innocent web applications running on the proxy server who think
# the only one who can access services on "localhost" is a local
# user
#http_access deny to_localhost
#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

# Example rule allowing access from your local networks. Adapt
# to list your (internal) IP networks from where browsing should
# be allowed
#acl our_networks src 192.168.1.0/24 192.168.2.0/24
#http_access allow our_networks

# And finally deny all other access to this proxy
http_access deny all
NOCOMMENT_END
DOC_END

NAME: http_access2
TYPE: acl_access
LOC: Config.accessList.http2
DEFAULT: none
DOC_START
    Allowing or Denying access based on defined access lists

    Identical to http_access, but runs after redirectors. If not
    set then only http_access is used.
DOC_END

NAME: http_reply_access
TYPE: acl_access
LOC: Config.accessList.reply
DEFAULT: none
DEFAULT_IF_NONE: allow all
DOC_START
    Allow replies to client requests. This is complementary to
    http_access.

    http_reply_access allow|deny [!] aclname ...
    NOTE: if there are no access lines present, the default is to
    allow all replies

    If none of the access lines cause a match the opposite of the
    last line will apply. Thus it is good practice to end the rules
    with an "allow all" or "deny all" entry.
DOC_END

NAME: icp_access
TYPE: acl_access
LOC: Config.accessList.icp
DEFAULT: none
DEFAULT_IF_NONE: deny all
DOC_START
    Allowing or Denying access to the ICP port based on defined
    access lists

    icp_access allow|deny [!]aclname ...

    See http_access for details

NOCOMMENT_START
#Allow ICP queries from everyone
icp_access allow all
NOCOMMENT_END
DOC_END

NAME: htcp_access
IFDEF: USE_HTCP
TYPE: acl_access
LOC: Config.accessList.htcp
DEFAULT: none
DEFAULT_IF_NONE: deny all
DOC_START
    Allowing or Denying access to the HTCP port based on defined
    access lists

    htcp_access allow|deny [!]aclname ...

    See http_access for details

    NOTE: The default if no htcp_access lines are present is to
    deny all traffic. This default may cause problems with peers
    using the htcp or htcp-oldsquid options.

#Allow HTCP queries from everyone
htcp_access allow all
DOC_END

NAME: htcp_clr_access
IFDEF: USE_HTCP
TYPE: acl_access
LOC: Config.accessList.htcp_clr
DEFAULT: none
DEFAULT_IF_NONE: deny all
DOC_START
    Allowing or Denying access to purge content using HTCP based
    on defined access lists

    htcp_clr_access allow|deny [!]aclname ...

    See http_access for details

#Allow HTCP CLR requests from trusted peers
acl htcp_clr_peer src 172.16.1.2
htcp_clr_access allow htcp_clr_peer
DOC_END

NAME: miss_access
TYPE: acl_access
LOC: Config.accessList.miss
DEFAULT: none
DOC_START
    Use to force your neighbors to use you as a sibling instead of
    a parent. For example:

        acl localclients src 172.16.0.0/16
        miss_access allow localclients
        miss_access deny !localclients

    This means only your local clients are allowed to fetch MISSES
    and all other clients can only fetch HITS.

    By default, allow all clients who passed the http_access rules
    to fetch MISSES from us.
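    The first-match semantics shared by http_access, miss_access
    and the other acl_access directives can be sketched as follows.
    This is a simplified model for illustration, not Squid's
    implementation: ACL matching is reduced to a set of names that
    match the request.

```python
def check_access(rules, request_acls):
    """Sketch of acl_access evaluation: `rules` is a list of
    (action, [aclnames]) pairs; the first rule whose ACLs all
    match decides. If nothing matches, the default is the
    opposite of the last rule's action; with no rules at all,
    deny (as http_access does)."""
    for action, acls in rules:
        # A rule matches only when every listed ACL matches;
        # "!name" negates a single ACL.
        if all((acl[1:] not in request_acls) if acl.startswith("!")
               else (acl in request_acls) for acl in acls):
            return action
    if not rules:
        return "deny"
    return "deny" if rules[-1][0] == "allow" else "allow"
```

    This is why the documentation recommends ending every access
    list with an explicit "allow all" or "deny all" rule: it makes
    the fall-through default visible instead of implied.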
NOCOMMENT_START
#Default setting:
# miss_access allow all
NOCOMMENT_END
DOC_END

NAME: ident_lookup_access
TYPE: acl_access
IFDEF: USE_IDENT
DEFAULT: none
DEFAULT_IF_NONE: deny all
LOC: Config.accessList.identLookup
DOC_START
    A list of ACL elements which, if matched, cause an ident
    (RFC 931) lookup to be performed for this request. For example,
    you might choose to always perform ident lookups for your main
    multi-user Unix boxes, but not for your Macs and PCs. By
    default, ident lookups are not performed for any requests.

    To enable ident lookups for specific client addresses, you can
    follow this example:

        acl ident_aware_hosts src 198.168.1.0/255.255.255.0
        ident_lookup_access allow ident_aware_hosts
        ident_lookup_access deny all

    Only src type ACL checks are fully supported. A src_domain ACL
    might work at times, but it will not always provide the correct
    result.
DOC_END

NAME: reply_body_max_size
COMMENT: bytes deny acl acl...
TYPE: body_size_t
DEFAULT: none
DEFAULT_IF_NONE: 0 allow all
LOC: Config.ReplyBodySize
DOC_START
    This option specifies the maximum size of a reply body in
    bytes. It can be used to prevent users from downloading very
    large files, such as MP3s and movies. When the reply headers
    are received, the reply_body_max_size lines are processed, and
    the first line with a result of "deny" is used as the maximum
    body size for this reply.

    This size is checked twice. First when we get the reply
    headers, we check the content-length value. If the content
    length value exists and is larger than the allowed size, the
    request is denied and the user receives an error message that
    says "the request or reply is too large." If there is no
    content-length, and the reply size exceeds this limit, the
    client's connection is just closed and they will receive a
    partial reply.

    WARNING: downstream caches probably can not detect a partial
    reply if there is no content-length header, so they will cache
    partial responses and give them out as hits.
    You should NOT use this option if you have downstream caches.

    If you set this parameter to zero (the default), there will be
    no limit imposed.
DOC_END

COMMENT_START
 OPTIONS FOR X-Forwarded-For
 -----------------------------------------------------------------------------
COMMENT_END

NAME: follow_x_forwarded_for
TYPE: acl_access
IFDEF: FOLLOW_X_FORWARDED_FOR
LOC: Config.accessList.followXFF
DEFAULT: none
DEFAULT_IF_NONE: deny all
DOC_START
    Allowing or Denying the X-Forwarded-For header to be followed
    to find the original source of a request.

    Requests may pass through a chain of several other proxies
    before reaching us. The X-Forwarded-For header will contain a
    comma-separated list of the IP addresses in the chain, with the
    rightmost address being the most recent.

    If a request reaches us from a source that is allowed by this
    configuration item, then we consult the X-Forwarded-For header
    to see where that host received the request from. If the
    X-Forwarded-For header contains multiple addresses, and if
    acl_uses_indirect_client is on, then we continue backtracking
    until we reach an address for which we are not allowed to
    follow the X-Forwarded-For header, or until we reach the first
    address in the list. (If acl_uses_indirect_client is off, then
    it's impossible to backtrack through more than one level of
    X-Forwarded-For addresses.)

    The end result of this process is an IP address that we will
    refer to as the indirect client address. This address may be
    treated as the client address for access control, delay pools
    and logging, depending on the acl_uses_indirect_client,
    delay_pool_uses_indirect_client and log_uses_indirect_client
    options.

    SECURITY CONSIDERATIONS:

    Any host for which we follow the X-Forwarded-For header can
    place incorrect information in the header, and Squid will use
    the incorrect information as if it were the source address of
    the request. This may enable remote hosts to bypass any access
    control restrictions that are based on the client's source
    addresses.
    For example:

        acl localhost src 127.0.0.1
        acl my_other_proxy srcdomain .proxy.example.com
        follow_x_forwarded_for allow localhost
        follow_x_forwarded_for allow my_other_proxy
DOC_END

NAME: acl_uses_indirect_client
COMMENT: on|off
TYPE: onoff
IFDEF: FOLLOW_X_FORWARDED_FOR
DEFAULT: on
LOC: Config.onoff.acl_uses_indirect_client
DOC_START
    Controls whether the indirect client address (see
    follow_x_forwarded_for) is used instead of the direct client
    address in acl matching.
DOC_END

NAME: delay_pool_uses_indirect_client
COMMENT: on|off
TYPE: onoff
IFDEF: FOLLOW_X_FORWARDED_FOR && DELAY_POOLS
DEFAULT: on
LOC: Config.onoff.delay_pool_uses_indirect_client
DOC_START
    Controls whether the indirect client address (see
    follow_x_forwarded_for) is used instead of the direct client
    address in delay pools.
DOC_END

NAME: log_uses_indirect_client
COMMENT: on|off
TYPE: onoff
IFDEF: FOLLOW_X_FORWARDED_FOR
DEFAULT: on
LOC: Config.onoff.log_uses_indirect_client
DOC_START
    Controls whether the indirect client address (see
    follow_x_forwarded_for) is used instead of the direct client
    address in the access log.
DOC_END

COMMENT_START
 NETWORK OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: http_port ascii_port
TYPE: http_port_list
DEFAULT: none
LOC: Config.Sockaddr.http
DOC_START
    Usage:  port [options]
            hostname:port [options]
            1.2.3.4:port [options]

    The socket addresses where Squid will listen for HTTP client
    requests. You may specify multiple socket addresses. There are
    three forms: port alone, hostname with port, and IP address
    with port. If you specify a hostname or IP address, Squid binds
    the socket to that specific address. This replaces the old
    'tcp_incoming_address' option. Most likely, you do not need to
    bind to a specific address, so you can use the port number
    alone.

    If you are running Squid in accelerator mode, you probably want
    to listen on port 80 also, or instead.

    You may specify multiple socket addresses on multiple lines.
Options: transparent Support for transparent interception of outgoing requests without browser settings. tproxy Support Linux TPROXY for spoofing outgoing connections using the client IP address. accel Accelerator mode. Also needs at least one of vhost/vport/defaultsite. defaultsite=domainname What to use for the Host: header if it is not present in a request. Determines what site (not origin server) accelerators should consider the default. Implies accel. vhost Accelerator mode using Host header for virtual domain support. Implies accel. vport Accelerator with IP based virtual host support. Implies accel. vport=NN As above, but uses specified port number rather than the http_port number. Implies accel. urlgroup= Default urlgroup to mark requests with (see also acl urlgroup and url_rewrite_program) protocol= Protocol to reconstruct accelerated requests with. Defaults to http. no-connection-auth Prevent forwarding of Microsoft connection oriented authentication (NTLM, Negotiate and Kerberos) If you run Squid on a dual-homed machine with an internal and an external interface we recommend you to specify the internal address:port in http_port. This way Squid will only be visible on the internal address. NOCOMMENT_START # Squid normally listens to port 3128 http_port @DEFAULT_HTTP_PORT@ NOCOMMENT_END DOC_END NAME: https_port IFDEF: USE_SSL TYPE: https_port_list DEFAULT: none LOC: Config.Sockaddr.https DOC_START Usage: [ip:]port cert=certificate.pem [key=key.pem] [options...] The socket address where Squid will listen for HTTPS client requests. This is really only useful for situations where you are running squid in accelerator mode and you want to do the SSL work at the accelerator level. You may specify multiple socket addresses on multiple lines, each with their own SSL certificate and/or options. Options: accel Accelerator mode. Also needs at least one of defaultsite or vhost. defaultsite= The name of the https site presented on this port. Implies accel. 
vhost Accelerator mode using Host header for virtual domain support. Requires a wildcard certificate or other certificate valid for more than one domain. Implies accel. urlgroup= Default urlgroup to mark requests with (see also acl urlgroup and url_rewrite_program). protocol= Protocol to reconstruct accelerated requests with. Defaults to https. cert= Path to SSL certificate (PEM format). key= Path to SSL private key file (PEM format) if not specified, the certificate file is assumed to be a combined certificate and key file. version= The version of SSL/TLS supported 1 automatic (default) 2 SSLv2 only 3 SSLv3 only 4 TLSv1 only cipher= Colon separated list of supported ciphers. options= Various SSL engine options. The most important being: NO_SSLv2 Disallow the use of SSLv2 NO_SSLv3 Disallow the use of SSLv3 NO_TLSv1 Disallow the use of TLSv1 SINGLE_DH_USE Always create a new key when using temporary/ephemeral DH key exchanges See src/ssl_support.c or OpenSSL SSL_CTX_set_options documentation for a complete list of options. clientca= File containing the list of CAs to use when requesting a client certificate. cafile= File containing additional CA certificates to use when verifying client certificates. If unset clientca will be used. capath= Directory containing additional CA certificates and CRL lists to use when verifying client certificates. crlfile= File of additional CRL lists to use when verifying the client certificate, in addition to CRLs stored in the capath. Implies VERIFY_CRL flag below. dhparams= File containing DH parameters for temporary/ephemeral DH key exchanges. sslflags= Various flags modifying the use of SSL: DELAYED_AUTH Don't request client certificates immediately, but wait until acl processing requires a certificate (not yet implemented). NO_DEFAULT_CA Don't use the default CA lists built in to OpenSSL. NO_SESSION_REUSE Don't allow for session reuse. Each connection will result in a new SSL session. 
VERIFY_CRL Verify CRL lists when accepting client certificates. VERIFY_CRL_ALL Verify CRL lists for all certificates in the client certificate chain. sslcontext= SSL session ID context identifier. vport Accelerator with IP based virtual host support. vport=NN As above, but uses specified port number rather than the https_port number. Implies accel. DOC_END NAME: tcp_outgoing_tos tcp_outgoing_ds tcp_outgoing_dscp TYPE: acl_tos DEFAULT: none LOC: Config.accessList.outgoing_tos DOC_START Allows you to select a TOS/Diffserv value to mark outgoing connections with, based on the username or source address making the request. tcp_outgoing_tos ds-field [!]aclname ... Example where normal_service_net uses the TOS value 0x00 and good_service_net uses 0x20 acl normal_service_net src 10.0.0.0/255.255.255.0 acl good_service_net src 10.0.1.0/255.255.255.0 tcp_outgoing_tos 0x00 normal_service_net tcp_outgoing_tos 0x20 good_service_net TOS/DSCP values really only have local significance - so you should know what you're specifying. For more information, see RFC2474 and RFC3260. The TOS/DSCP byte must be exactly that - an octet value 0 - 255, or "default" to use whatever default your host has. Note that in practice often only values 0 - 63 are usable, as the two highest bits have been redefined for use by ECN (RFC3168). Processing proceeds in the order specified, and stops at the first fully matching line. Note: The use of this directive with client dependent ACLs is incompatible with the use of server side persistent connections. To ensure correct results it is best to set server_persistent_connections to off when using this directive in such configurations. DOC_END NAME: tcp_outgoing_address TYPE: acl_address DEFAULT: none LOC: Config.accessList.outgoing_address DOC_START Allows you to map requests to different outgoing IP addresses based on the username or source address of the user making the request. tcp_outgoing_address ipaddr [[!]aclname] ...
Example where requests from 10.0.0.0/24 will be forwarded with source address 10.1.0.1, 10.0.2.0/24 forwarded with source address 10.1.0.2 and the rest will be forwarded with source address 10.1.0.3. acl normal_service_net src 10.0.0.0/24 acl good_service_net src 10.0.1.0/24 10.0.2.0/24 tcp_outgoing_address 10.1.0.1 normal_service_net tcp_outgoing_address 10.1.0.2 good_service_net tcp_outgoing_address 10.1.0.3 Processing proceeds in the order specified, and stops at the first fully matching line. Note: The use of this directive with client dependent ACLs is incompatible with the use of server side persistent connections. To ensure correct results it is best to set server_persistent_connections to off when using this directive in such configurations. DOC_END COMMENT_START SSL OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: ssl_unclean_shutdown IFDEF: USE_SSL TYPE: onoff DEFAULT: off LOC: Config.SSL.unclean_shutdown DOC_START Some browsers (especially MSIE) bug out on SSL shutdown messages. DOC_END NAME: ssl_engine IFDEF: USE_SSL TYPE: string LOC: Config.SSL.ssl_engine DEFAULT: none DOC_START The OpenSSL engine to use. You will need to set this if you would like to use hardware SSL acceleration, for example.
DOC_END NAME: sslproxy_client_certificate IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.cert TYPE: string DOC_START Client SSL Certificate to use when proxying https:// URLs DOC_END NAME: sslproxy_client_key IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.key TYPE: string DOC_START Client SSL Key to use when proxying https:// URLs DOC_END NAME: sslproxy_version IFDEF: USE_SSL DEFAULT: 1 LOC: Config.ssl_client.version TYPE: int DOC_START SSL version level to use when proxying https:// URLs DOC_END NAME: sslproxy_options IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.options TYPE: string DOC_START SSL engine options to use when proxying https:// URLs DOC_END NAME: sslproxy_cipher IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.cipher TYPE: string DOC_START SSL cipher list to use when proxying https:// URLs DOC_END NAME: sslproxy_cafile IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.cafile TYPE: string DOC_START file containing CA certificates to use when verifying server certificates while proxying https:// URLs DOC_END NAME: sslproxy_capath IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.capath TYPE: string DOC_START directory containing CA certificates to use when verifying server certificates while proxying https:// URLs DOC_END NAME: sslproxy_flags IFDEF: USE_SSL DEFAULT: none LOC: Config.ssl_client.flags TYPE: string DOC_START Various flags modifying the use of SSL while proxying https:// URLs: DONT_VERIFY_PEER Accept certificates even if they fail to verify. NO_DEFAULT_CA Don't use the default CA list built in to OpenSSL. DOC_END NAME: sslpassword_program IFDEF: USE_SSL DEFAULT: none LOC: Config.Program.ssl_password TYPE: string DOC_START Specify a program used for entering SSL key passphrases when using encrypted SSL certificate keys. If not specified keys must either be unencrypted, or Squid started with the -N option to allow it to query interactively for the passphrase. 
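A minimal sketch of the directive, using a made-up helper path (the helper simply has to print the passphrase on stdout; the path below is illustrative, not a shipped default):

```
# Hypothetical passphrase helper for encrypted certificate keys
sslpassword_program /usr/local/libexec/squid/ssl_password.sh
```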
DOC_END COMMENT_START OPTIONS WHICH AFFECT THE NEIGHBOR SELECTION ALGORITHM ----------------------------------------------------------------------------- COMMENT_END NAME: cache_peer TYPE: peer DEFAULT: none LOC: Config.peers DOC_START To specify other caches in a hierarchy, use the format: cache_peer hostname type http-port icp-port [options] For example, # proxy icp # hostname type port port options # -------------------- -------- ----- ----- ----------- cache_peer parent.foo.net parent 3128 3130 proxy-only default cache_peer sib1.foo.net sibling 3128 3130 proxy-only cache_peer sib2.foo.net sibling 3128 3130 proxy-only type: either 'parent', 'sibling', or 'multicast'. proxy-port: The port number where the cache listens for proxy requests. icp-port: Used for querying neighbor caches about objects. To have a non-ICP neighbor specify '7' for the ICP port and make sure the neighbor machine has the UDP echo port enabled in its /etc/inetd.conf file. NOTE: Also requires icp_port option enabled to send/receive requests via this method. options: proxy-only weight=n ttl=n no-query default round-robin carp multicast-responder closest-only no-digest no-netdb-exchange no-delay login=user:password | PASS | *:password connect-timeout=nn digest-url=url allow-miss max-conn=n htcp htcp-oldsquid originserver userhash sourcehash name=xxx monitorurl=url monitorsize=sizespec monitorinterval=seconds monitortimeout=seconds forceddomain=name ssl sslcert=/path/to/ssl/certificate sslkey=/path/to/ssl/key sslversion=1|2|3|4 sslcipher=... ssloptions=... front-end-https[=on|auto] connection-auth[=on|off|auto] use 'proxy-only' to specify objects fetched from this cache should not be saved locally. use 'weight=n' to affect the selection of a peer during any weighted peer-selection mechanisms. The weight must be an integer; default is 1, larger weights are favored more. This option does not affect parent selection if a peering protocol is not in use. 
use 'ttl=n' to specify an IP multicast TTL to use when sending ICP queries to this address. Only useful when sending to a multicast group. Because we don't accept ICP replies from random hosts, you must configure other group members as peers with the 'multicast-responder' option below. use 'no-query' to NOT send ICP queries to this neighbor. use 'default' if this is a parent cache which can be used as a "last-resort" if a peer cannot be located by any of the peer-selection mechanisms. If specified more than once, only the first is used. use 'round-robin' to define a set of parents which should be used in a round-robin fashion in the absence of any ICP queries. use 'carp' to define a set of parents which should be used as a CARP array. The requests will be distributed among the parents based on the CARP load balancing hash function based on their weight. 'multicast-responder' indicates the named peer is a member of a multicast group. ICP queries will not be sent directly to the peer, but ICP replies will be accepted from it. 'closest-only' indicates that, for ICP_OP_MISS replies, we'll only forward CLOSEST_PARENT_MISSes and never FIRST_PARENT_MISSes. use 'no-digest' to NOT request cache digests from this neighbor. 'no-netdb-exchange' disables requesting ICMP RTT database (NetDB) from the neighbor. use 'no-delay' to prevent access to this neighbor from influencing the delay pools. use 'login=user:password' if this is a personal/workgroup proxy and your parent requires proxy authentication. Note: The string can include URL escapes (i.e. %20 for spaces). This also means % must be written as %%. use 'login=PASS' if users must authenticate against the upstream proxy or in the case of a reverse proxy configuration, the origin web server. This will pass the users' credentials as they are to the peer.
Note: To combine this with local authentication the Basic authentication scheme must be used, and both servers must share the same user database as HTTP only allows for a single login (one for proxy, one for origin server). Also be warned this will expose your users' proxy password to the peer. USE WITH CAUTION use 'login=*:password' to pass the username to the upstream cache, but with a fixed password. This is meant to be used when the peer is in another administrative domain, but it is still needed to identify each user. The star can optionally be followed by some extra information which is added to the username. This can be used to identify this proxy to the peer, similar to the login=username:password option above. use 'connect-timeout=nn' to specify a peer specific connect timeout (also see the peer_connect_timeout directive) use 'digest-url=url' to tell Squid to fetch the cache digest (if digests are enabled) for this host from the specified URL rather than the Squid default location. use 'allow-miss' to disable Squid's use of only-if-cached when forwarding requests to siblings. This is primarily useful when icp_hit_stale is used by the sibling. Too extensive use of this option may result in forwarding loops, and you should avoid having two-way peerings with this option. (for example to deny peer usage on requests from peer by denying cache_peer_access if the source is a peer) use 'max-conn=n' to limit the number of connections Squid may open to this peer. use 'htcp' to send HTCP, instead of ICP, queries to the neighbor. You probably also want to set the "icp port" to 4827 instead of 3130. You must also allow this Squid htcp_access and http_access in the peer Squid configuration. use 'htcp-oldsquid' to send HTCP to old Squid versions. You must also allow this Squid htcp_access and http_access in the peer Squid configuration. 'originserver' causes this parent peer to be contacted as an origin server. Meant to be used in accelerator setups.
use 'userhash' to load-balance amongst a set of parents based on the client proxy_auth or ident username. use 'sourcehash' to load-balance amongst a set of parents based on the client source IP. use 'name=xxx' if you have multiple peers on the same host but different ports. This name can be used to differentiate the peers in cache_peer_access and similar directives. use 'monitorurl=url' to have Squid periodically request a given URL from the peer, and only consider the peer as alive if this monitoring is successful (default none) use 'monitorsize=min[-max]' to limit the size range of 'monitorurl' replies considered valid. Defaults to 0 to accept any size replies as valid. use 'monitorinterval=seconds' to change frequency of how often the peer is monitored with 'monitorurl' (default 300 for a 5 minute interval). If set to 0 then monitoring is disabled even if a URL is defined. use 'monitortimeout=seconds' to change the timeout of 'monitorurl'. Defaults to 'monitorinterval'. use 'forceddomain=name' to forcibly set the Host header of requests forwarded to this peer. Useful in accelerator setups where the server (peer) expects a certain domain name and using redirectors to feed this domain name is not feasible. use 'ssl' to indicate connections to this peer should be SSL/TLS encrypted. use 'sslcert=/path/to/ssl/certificate' to specify a client SSL certificate to use when connecting to this peer. use 'sslkey=/path/to/ssl/key' to specify the private SSL key corresponding to sslcert above. If 'sslkey' is not specified 'sslcert' is assumed to reference a combined file containing both the certificate and the key. use sslversion=1|2|3|4 to specify the SSL version to use when connecting to this peer 1 = automatic (default) 2 = SSL v2 only 3 = SSL v3 only 4 = TLS v1 only use sslcipher=... to specify the list of valid SSL ciphers to use when connecting to this peer. use ssloptions=...
to specify various SSL engine options: NO_SSLv2 Disallow the use of SSLv2 NO_SSLv3 Disallow the use of SSLv3 NO_TLSv1 Disallow the use of TLSv1 See src/ssl_support.c or the OpenSSL documentation for a more complete list. use sslcafile=... to specify a file containing additional CA certificates to use when verifying the peer certificate. use sslcapath=... to specify a directory containing additional CA certificates to use when verifying the peer certificate. use sslcrlfile=... to specify a certificate revocation list file to use when verifying the peer certificate. use sslflags=... to specify various flags modifying the SSL implementation: DONT_VERIFY_PEER Accept certificates even if they fail to verify. NO_DEFAULT_CA Don't use the default CA list built in to OpenSSL. use ssldomain= to specify the peer name as advertised in its certificate. Used for verifying the correctness of the received peer certificate. If not specified the peer hostname will be used. use front-end-https to enable the "Front-End-Https: On" header needed when using Squid as a SSL frontend in front of Microsoft OWA. See MS KB document Q307347 for details on this header. If set to auto the header will only be added if the request is forwarded as a https:// URL. use connection-auth=off to tell Squid that this peer does not support Microsoft connection oriented authentication, and any such challenges received from there should be ignored. Default is auto to automatically determine the status of the peer. DOC_END NAME: cache_peer_domain cache_host_domain TYPE: hostdomain DEFAULT: none LOC: none DOC_START Use to limit the domains for which a neighbor cache will be queried. Usage: cache_peer_domain cache-host domain [domain ...] cache_peer_domain cache-host !domain For example, specifying cache_peer_domain parent.foo.net .edu has the effect such that UDP query packets are sent to 'parent.foo.net' only when the requested object exists on a server in the .edu domain. Prefixing the domain name with '!'
means the cache will be queried for objects NOT in that domain. NOTE: * Any number of domains may be given for a cache-host, either on the same or separate lines. * When multiple domains are given for a particular cache-host, the first matched domain is applied. * Cache hosts with no domain restrictions are queried for all requests. * There are no defaults. * There is also a 'cache_peer_access' tag in the ACL section. DOC_END NAME: cache_peer_access TYPE: peer_access DEFAULT: none LOC: none DOC_START Similar to 'cache_peer_domain' but provides more flexibility by using ACL elements. cache_peer_access cache-host allow|deny [!]aclname ... The syntax is identical to 'http_access' and the other lists of ACL elements. See the comments for 'http_access' below, or the Squid FAQ (http://www.squid-cache.org/FAQ/FAQ-10.html). DOC_END NAME: neighbor_type_domain TYPE: hostdomaintype DEFAULT: none LOC: none DOC_START usage: neighbor_type_domain neighbor parent|sibling domain domain ... Modifying the neighbor type for specific domains is now possible. You can treat some domains differently than the default neighbor type specified on the 'cache_peer' line. Normally it should only be necessary to list domains which should be treated differently because the default neighbor type applies for hostnames which do not match domains listed here. EXAMPLE: cache_peer cache.foo.org parent 3128 3130 neighbor_type_domain cache.foo.org sibling .com .net neighbor_type_domain cache.foo.org sibling .au .de DOC_END NAME: dead_peer_timeout COMMENT: (seconds) DEFAULT: 10 seconds TYPE: time_t LOC: Config.Timeout.deadPeer DOC_START This controls how long Squid waits to declare a peer cache as "dead." If there are no ICP replies received in this amount of time, Squid will declare the peer dead and not expect to receive any further ICP replies. However, it continues to send ICP queries, and will mark the peer as alive upon receipt of the first subsequent ICP reply.
This timeout also affects when Squid expects to receive ICP replies from peers. If more than 'dead_peer' seconds have passed since the last ICP reply was received, Squid will not expect to receive an ICP reply on the next query. Thus, if your time between requests is greater than this timeout, you will see a lot of requests sent DIRECT to origin servers instead of to your parents. DOC_END NAME: hierarchy_stoplist TYPE: wordlist DEFAULT: none LOC: Config.hierarchy_stoplist DOC_START A list of words which, if found in a URL, cause the object to be handled directly by this cache. In other words, use this to not query neighbor caches for certain objects. You may list this option multiple times. Note: never_direct overrides this option. NOCOMMENT_START # We recommend you use at least the following line. hierarchy_stoplist cgi-bin ? NOCOMMENT_END DOC_END COMMENT_START MEMORY CACHE OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: cache_mem COMMENT: (bytes) TYPE: b_size_t DEFAULT: 8 MB LOC: Config.memMaxSize DOC_START NOTE: THIS PARAMETER DOES NOT SPECIFY THE MAXIMUM PROCESS SIZE. IT ONLY PLACES A LIMIT ON HOW MUCH ADDITIONAL MEMORY SQUID WILL USE AS A MEMORY CACHE OF OBJECTS. SQUID USES MEMORY FOR OTHER THINGS AS WELL. SEE THE SQUID FAQ SECTION 8 FOR DETAILS. 'cache_mem' specifies the ideal amount of memory to be used for: * In-Transit objects * Hot Objects * Negative-Cached objects Data for these objects are stored in 4 KB blocks. This parameter specifies the ideal upper limit on the total size of 4 KB blocks allocated. In-transit objects take the highest priority: when additional space is needed for incoming data, negative-cached and hot objects will be released. In other words, the negative-cached and hot objects will fill up any unused space not needed for in-transit objects. If circumstances require, this limit will be exceeded.
Specifically, if your incoming request rate requires more than 'cache_mem' of memory to hold in-transit objects, Squid will exceed this limit to satisfy the new requests. When the load decreases, blocks will be freed until the high-water mark is reached. Thereafter, blocks will be used to store hot objects. DOC_END NAME: maximum_object_size_in_memory COMMENT: (bytes) TYPE: b_size_t DEFAULT: 8 KB LOC: Config.Store.maxInMemObjSize DOC_START Squid will not attempt to keep objects greater than this size in the memory cache. This should be set high enough to keep frequently accessed objects in memory to improve performance whilst low enough to keep larger objects from hoarding cache_mem. DOC_END NAME: memory_replacement_policy TYPE: removalpolicy LOC: Config.memPolicy DEFAULT: lru DOC_START The memory replacement policy parameter determines which objects are purged from memory when memory space is needed. See cache_replacement_policy for details. DOC_END COMMENT_START DISK CACHE OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: cache_replacement_policy TYPE: removalpolicy LOC: Config.replPolicy DEFAULT: lru DOC_START The cache replacement policy parameter determines which objects are evicted (replaced) when disk space is needed. lru : Squid's original list based LRU policy heap GDSF : Greedy-Dual Size Frequency heap LFUDA: Least Frequently Used with Dynamic Aging heap LRU : LRU policy implemented using a heap Applies to any cache_dir lines listed below this. The LRU policies keep recently referenced objects. The heap GDSF policy optimizes object hit rate by keeping smaller popular objects in cache so it has a better chance of getting a hit. It achieves a lower byte hit rate than LFUDA though since it evicts larger (possibly popular) objects.
The heap LFUDA policy keeps popular objects in cache regardless of their size and thus optimizes byte hit rate at the expense of hit rate since one large, popular object will prevent many smaller, slightly less popular objects from being cached. Both policies utilize a dynamic aging mechanism that prevents cache pollution that can otherwise occur with frequency-based replacement policies. NOTE: if using the LFUDA replacement policy you should increase the value of maximum_object_size above its default of 4096 KB to maximize the potential byte hit rate improvement of LFUDA. For more information about the GDSF and LFUDA cache replacement policies see http://www.hpl.hp.com/techreports/1999/HPL-1999-69.html and http://fog.hpl.external.hp.com/techreports/98/HPL-98-173.html. DOC_END NAME: cache_dir TYPE: cachedir DEFAULT: none DEFAULT_IF_NONE: ufs @DEFAULT_SWAP_DIR@ 100 16 256 LOC: Config.cacheSwap DOC_START Usage: cache_dir Type Directory-Name Fs-specific-data [options] You can specify multiple cache_dir lines to spread the cache among different disk partitions. Type specifies the kind of storage system to use. Only "ufs" is built by default. To enable any of the other storage systems see the --enable-storeio configure option. 'Directory' is a top-level directory where cache swap files will be stored. If you want to use an entire disk for caching, this can be the mount-point directory. The directory must exist and be writable by the Squid process. Squid will NOT create this directory for you. Only when using COSS may a raw disk device or a stripe file be specified; in that case the configuration of the "cache_swap_log" tag is mandatory. The ufs store type: "ufs" is the old well-known Squid storage format that has always been there. cache_dir ufs Directory-Name Mbytes L1 L2 [options] 'Mbytes' is the amount of disk space (MB) to use under this directory. The default is 100 MB. Change this to suit your configuration. Do NOT put the size of your disk drive here.
Instead, if you want Squid to use the entire disk drive, subtract 20% and use that value. 'Level-1' is the number of first-level subdirectories which will be created under the 'Directory'. The default is 16. 'Level-2' is the number of second-level subdirectories which will be created under each first-level directory. The default is 256. The aufs store type: "aufs" uses the same storage format as "ufs", utilizing POSIX-threads to avoid blocking the main Squid process on disk-I/O. This was formerly known in Squid as async-io. cache_dir aufs Directory-Name Mbytes L1 L2 [options] see argument descriptions under ufs above The diskd store type: "diskd" uses the same storage format as "ufs", utilizing a separate process to avoid blocking the main Squid process on disk-I/O. cache_dir diskd Directory-Name Mbytes L1 L2 [options] [Q1=n] [Q2=n] see argument descriptions under ufs above Q1 specifies the number of unacknowledged I/O requests when Squid stops opening new files. If this many messages are in the queues, Squid won't open new files. Default is 64 Q2 specifies the number of unacknowledged messages when Squid starts blocking. If this many messages are in the queues, Squid blocks until it receives some replies. Default is 72 When Q1 < Q2 (the default), the cache directory is optimized for lower response time at the expense of a decrease in hit ratio. If Q1 > Q2, the cache directory is optimized for higher hit ratio at the expense of an increase in response time. The coss store type: block-size=n defines the "block size" for COSS cache_dir's. Squid uses file numbers as block numbers. Since file numbers are limited to 24 bits, the block size determines the maximum size of the COSS partition. The default is 512 bytes, which leads to a maximum cache_dir size of 512<<24, or 8 GB. Note you should not change the COSS block size after Squid has written some objects to the cache_dir. 
overwrite-percent=n defines the percentage of disk that COSS must write to before a given object will be moved to the current stripe. A value of "n" closer to 100 will cause COSS to waste less disk space by having multiple copies of an object on disk, but will increase the chances of overwriting a popular object as COSS overwrites stripes. A value of "n" close to 0 will cause COSS to keep all current objects in the current COSS stripe at the expense of the hit rate. The default value of 50 will allow any given object to be stored on disk a maximum of 2 times. max-stripe-waste=n defines the maximum amount of space that COSS will waste in a given stripe (in bytes). When COSS writes data to disk, it will potentially waste up to "max-size" worth of disk space for each 1MB of data written. If "max-size" is set to a large value (ie >256k), this could potentially result in large amounts of wasted disk space. Setting this value to a lower value (ie 64k or 32k) will result in a COSS disk refusing to cache larger objects until the COSS stripe has been filled to within "max-stripe-waste" of the maximum size (1MB). membufs=n defines the number of "memory-only" stripes that COSS will use. When a cache hit is performed on a COSS stripe before COSS has reached the overwrite-percent value for that object, COSS will use a series of memory buffers to hold the object while the data is sent to the client. This will define the maximum number of memory-only buffers that COSS will use. The default value is 10, which will use a maximum of 10MB of memory for buffers. maxfullbufs=n defines the maximum number of stripes a COSS partition will have in memory waiting to be freed (either because the disk is under load and the stripe is unwritten, or because clients are still transferring data from objects using the memory). In order to try and maintain a good hit rate under load, COSS will reserve the last 2 full stripes for object hits.
(ie a COSS cache_dir will reject new objects when the number of full stripes is 2 less than maxfullbufs) The null store type: no options are allowed or required Common options: read-only, no new objects should be stored to this cache_dir min-size=n, refers to the min object size this storedir will accept. It's used to restrict a storedir to only store large objects (e.g. aufs) while other storedirs are optimized for smaller objects (e.g. COSS). Defaults to 0. max-size=n, refers to the max object size this storedir supports. It is used to initially choose the storedir to dump the object. Note: To make optimal use of the max-size limits you should order the cache_dir lines with the smallest max-size value first and the ones with no max-size specification last. Note that for coss, max-size must be less than COSS_MEMBUF_SZ (hard coded at 1 MB). DOC_END NAME: store_dir_select_algorithm TYPE: string LOC: Config.store_dir_select_algorithm DEFAULT: least-load DOC_START Set this to 'round-robin' as an alternative. DOC_END NAME: max_open_disk_fds TYPE: int LOC: Config.max_open_disk_fds DEFAULT: 0 DOC_START To avoid having disk as the I/O bottleneck Squid can optionally bypass the on-disk cache if more than this amount of disk file descriptors are open. A value of 0 indicates no limit. DOC_END NAME: minimum_object_size COMMENT: (bytes) TYPE: b_size_t DEFAULT: 0 KB LOC: Config.Store.minObjectSize DOC_START Objects smaller than this size will NOT be saved on disk. The value is specified in kilobytes, and the default is 0 KB, which means there is no minimum. DOC_END NAME: maximum_object_size COMMENT: (bytes) TYPE: b_size_t DEFAULT: 4096 KB LOC: Config.Store.maxObjectSize DOC_START Objects larger than this size will NOT be saved on disk. The value is specified in kilobytes, and the default is 4MB. If you wish to get a high BYTES hit ratio, you should probably increase this (one 32 MB object hit counts for 3200 10KB hits). 
If you wish to increase speed more than you want to save bandwidth you should leave this low. NOTE: if using the LFUDA replacement policy you should increase this value to maximize the byte hit rate improvement of LFUDA! See replacement_policy below for a discussion of this policy. DOC_END NAME: cache_swap_low COMMENT: (percent, 0-100) TYPE: int DEFAULT: 90 LOC: Config.Swap.lowWaterMark DOC_NONE NAME: cache_swap_high COMMENT: (percent, 0-100) TYPE: int DEFAULT: 95 LOC: Config.Swap.highWaterMark DOC_START The low- and high-water marks for cache object replacement. Replacement begins when the swap (disk) usage is above the low-water mark and attempts to maintain utilization near the low-water mark. As swap utilization gets close to the high-water mark object eviction becomes more aggressive. If utilization is close to the low-water mark less replacement is done each time. Defaults are 90% and 95%. If you have a large cache, 5% could be hundreds of MB. If this is the case you may wish to set these numbers closer together. DOC_END COMMENT_START LOGFILE OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: logformat TYPE: logformat LOC: Config.Log.logformats DEFAULT: none DOC_START Usage: logformat <name> <format specification> Defines an access log format. The <format specification> is a string with embedded % format codes. % format codes all follow the same basic structure where all but the formatcode is optional. Output strings are automatically escaped as required according to their context and the output format modifiers are usually not needed, but can be specified if an explicit output format is desired.

    % ["|[|'|#] [-] [[0]width] [{argument}] formatcode

        "      output in quoted string format
        [      output in squid text log format as used by log_mime_hdrs
        #      output in URL quoted format
        '      output as-is
        -      left aligned
        width  field width. If starting with 0 the output is zero padded
        {arg}  argument such as header name etc

    Format codes:

        >a     Client source IP address
        >A     Client FQDN
        >p     Client source port
        <A     Server IP address or peer name
        la     Local IP address (http_port)
        lp     Local port number (http_port)
        ts     Seconds since epoch
        tu     subsecond time (milliseconds)
        tl     Local time. Optional strftime format argument
        tr     Response time (milliseconds)
        >h     Request header. Optional header name argument on the format header[:[separator]element]
        <h     Reply header. Optional header name argument as for >h
        un     User name
        ul     User name from authentication
        ui     User name from ident
        us     User name from SSL
        ue     User name from external acl helper
        Hs     HTTP status code
        Ss     Squid request status (TCP_MISS etc)
        Sh     Squid hierarchy status (DEFAULT_PARENT etc)
        mt     MIME content type
        rm     Request method (GET/POST etc)
        ru     Request URL
        rv     Request protocol version
        ea     Log string returned by external acl
        <st    Reply size including HTTP headers
        >st    Request size including HTTP headers
        st     Request+Reply size including HTTP headers
        %      a literal % character

    The default formats available (which do not need re-defining) are:

        logformat squid %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt
        logformat squidmime %ts.%03tu %6tr %>a %Ss/%03Hs %<st %rm %ru %un %Sh/%<A %mt [%>h] [%<h]
        logformat common %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st %Ss:%Sh
        logformat combined %>a %ui %un [%tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh

DOC_END NAME: access_log cache_access_log TYPE: access_log LOC: Config.Log.accesslogs DEFAULT: none DOC_START These files log client request activities. Has one line per HTTP or ICP request. The format is: access_log <filepath> [<logformat name> [acl acl ...]] access_log none [acl acl ...] Will log to the specified file using the specified format (which must be defined in a logformat directive) those entries which match ALL the acl's specified (which must be defined in acl clauses). If no acl is specified, all requests will be logged to this file. To disable logging of a request use the filepath "none", in which case a logformat name should not be specified. To log the request via syslog specify a filepath of "syslog": access_log syslog[:facility.priority] [format [acl1 [acl2 ....]]] where facility could be any of: authpriv, daemon, local0 .. local7 or user. And priority could be any of: err, warning, notice, info, debug. 
Note: 2.6.STABLE14 and earlier only support a slightly different and undocumented format with all uppercase LOG_FACILITY|LOG_PRIORITY NOCOMMENT_START access_log @DEFAULT_ACCESS_LOG@ squid NOCOMMENT_END DOC_END NAME: log_access TYPE: acl_access LOC: Config.accessList.log DEFAULT: none COMMENT: allow|deny acl acl... DOC_START This option allows you to control which requests get logged to access.log (see access_log directive). Requests denied for logging will also not be accounted for in performance counters. DOC_END NAME: cache_log TYPE: string DEFAULT: @DEFAULT_CACHE_LOG@ LOC: Config.Log.log DOC_START Cache logging file. This is where general information about your cache's behavior goes. You can increase the amount of data logged to this file with the "debug_options" tag below. DOC_END NAME: cache_store_log TYPE: string DEFAULT: @DEFAULT_STORE_LOG@ LOC: Config.Log.store DOC_START Logs the activities of the storage manager. Shows which objects are ejected from the cache, and which objects are saved and for how long. To disable, enter "none". There are not really any utilities to analyze this data, so you can safely disable it. DOC_END NAME: cache_swap_state cache_swap_log TYPE: string LOC: Config.Log.swap DEFAULT: none DOC_START Location for the cache "swap.state" file. This index file holds the metadata of objects saved on disk. It is used to rebuild the cache during startup. Normally this file resides in each 'cache_dir' directory, but you may specify an alternate pathname here. Note you must give a full filename, not just a directory. Since this is the index for the whole object list you CANNOT periodically rotate it! If %s is used in the file name it will be replaced with a representation of the cache_dir name where each / is replaced with '.'. This is needed to allow adding/removing cache_dir lines when cache_swap_log is being used. 
If you have more than one 'cache_dir' and %s is not used in the name, these swap logs will have names such as: cache_swap_log.00 cache_swap_log.01 cache_swap_log.02 The numbered extension (which is added automatically) corresponds to the order of the 'cache_dir' lines in this configuration file. If you change the order of the 'cache_dir' lines in this file, these index files will NOT correspond to the correct 'cache_dir' entry (unless you manually rename them). We recommend you do NOT use this option. It is better to keep these index files in each 'cache_dir' directory. DOC_END NAME: logfile_rotate TYPE: int DEFAULT: 10 LOC: Config.Log.rotateNumber DOC_START Specifies the number of logfile rotations to make when you type 'squid -k rotate'. The default is 10, which will rotate with extensions 0 through 9. Setting logfile_rotate to 0 will disable the file name rotation, but the logfiles are still closed and re-opened. This will enable you to rename the logfiles yourself just before sending the rotate signal. Note, the 'squid -k rotate' command normally sends a USR1 signal to the running squid process. In certain situations (e.g. on Linux with Async I/O), USR1 is used for other purposes, so -k rotate uses another signal. It is best to get in the habit of using 'squid -k rotate' instead of 'kill -USR1 <pid>'. DOC_END NAME: emulate_httpd_log COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.common_log DOC_START The Cache can emulate the log file format which many 'httpd' programs use. To disable/enable this emulation, set emulate_httpd_log to 'off' or 'on'. The default is to use the native log format since it includes useful information Squid-specific log analyzers use. DOC_END NAME: log_ip_on_direct COMMENT: on|off TYPE: onoff DEFAULT: on LOC: Config.onoff.log_ip_on_direct DOC_START Log the destination IP address in the hierarchy log tag when going direct. Earlier Squid versions logged the hostname here. If you prefer the old way set this to off. 
DOC_END NAME: mime_table TYPE: string DEFAULT: @DEFAULT_MIME_TABLE@ LOC: Config.mimeTablePathname DOC_START Pathname to Squid's MIME table. You shouldn't need to change this, but the default file contains examples and formatting information if you do. DOC_END NAME: log_mime_hdrs COMMENT: on|off TYPE: onoff LOC: Config.onoff.log_mime_hdrs DEFAULT: off DOC_START The Cache can record both the request and the response MIME headers for each HTTP transaction. The headers are encoded safely and will appear as two bracketed fields at the end of the access log (for either the native or httpd-emulated log formats). To enable this logging set log_mime_hdrs to 'on'. DOC_END NAME: useragent_log TYPE: string LOC: Config.Log.useragent DEFAULT: none IFDEF: USE_USERAGENT_LOG DOC_START Squid will write the User-Agent field from HTTP requests to the filename specified here. By default useragent_log is disabled. DOC_END NAME: referer_log referrer_log TYPE: string LOC: Config.Log.referer DEFAULT: none IFDEF: USE_REFERER_LOG DOC_START Squid will write the Referer field from HTTP requests to the filename specified here. By default referer_log is disabled. Note that "referer" is actually a misspelling of "referrer"; however, the misspelt version has been accepted into the HTTP RFCs and we accept both. DOC_END NAME: pid_filename TYPE: string DEFAULT: @DEFAULT_PID_FILE@ LOC: Config.pidFilename DOC_START A filename to write the process-id to. To disable, enter "none". DOC_END NAME: debug_options TYPE: eol DEFAULT: ALL,1 LOC: Config.debugOptions DOC_START Logging options are set as section,level where each source file is assigned a unique section. Lower levels result in less output. Full debugging (level 9) can result in a very large log file, so be careful. The magic word "ALL" sets debugging levels for all sections. We recommend normally running with "ALL,1". 
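For example, to keep the default level for everything while raising a single subsystem to level 2 (the section number 33 here is purely illustrative; the actual section numbers are listed at the top of each source file):

    debug_options ALL,1 33,2
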
DOC_END NAME: log_fqdn COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.log_fqdn DOC_START Turn this on if you wish to log fully qualified domain names in the access.log. To do this Squid does a DNS lookup of all IP's connecting to it. This can (in some situations) increase latency, which makes your cache seem slower for interactive browsing. DOC_END NAME: client_netmask TYPE: address LOC: Config.Addrs.client_netmask DEFAULT: 255.255.255.255 DOC_START A netmask for client addresses in logfiles and cachemgr output. Change this to protect the privacy of your cache clients. A netmask of 255.255.255.0 will log all IP's in that range with the last digit set to '0'. DOC_END NAME: forward_log IFDEF: WIP_FWD_LOG TYPE: string DEFAULT: none LOC: Config.Log.forward DOC_START Logs the server-side requests. This is currently work in progress. DOC_END NAME: strip_query_terms TYPE: onoff LOC: Config.onoff.strip_query_terms DEFAULT: on DOC_START By default, Squid strips query terms from requested URLs before logging. This protects your users' privacy. DOC_END NAME: buffered_logs COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.buffered_logs DOC_START The cache.log log file is written with stdio functions, and as such it can be buffered or unbuffered. By default it will be unbuffered. Buffering it can speed up the writing slightly (though you are unlikely to need to worry unless you run with tons of debugging enabled in which case performance will suffer badly anyway..). 
DOC_END COMMENT_START OPTIONS FOR FTP GATEWAYING ----------------------------------------------------------------------------- COMMENT_END NAME: ftp_user TYPE: string DEFAULT: Squid@ LOC: Config.Ftp.anon_user DOC_START If you want the anonymous login password to be more informative (and enable the use of picky ftp servers), set this to something reasonable for your domain, like wwwuser@somewhere.net The reason why this is domainless by default is the request can be made on behalf of a user in any domain, depending on how the cache is used. Some ftp servers also validate that the email address is valid (for example perl.com). DOC_END NAME: ftp_list_width TYPE: int DEFAULT: 32 LOC: Config.Ftp.list_width DOC_START Sets the width of ftp listings. This should be set to fit in the width of a standard browser. Setting this too small can cut off long filenames when browsing ftp sites. DOC_END NAME: ftp_passive TYPE: onoff DEFAULT: on LOC: Config.Ftp.passive DOC_START If your firewall does not allow Squid to use passive connections, turn off this option. DOC_END NAME: ftp_sanitycheck TYPE: onoff DEFAULT: on LOC: Config.Ftp.sanitycheck DOC_START For security and data integrity reasons Squid by default performs sanity checks of the addresses of FTP data connections to ensure the data connection is to the requested server. If you need to allow FTP connections to servers using another IP address for the data connection turn this off. DOC_END NAME: ftp_telnet_protocol TYPE: onoff DEFAULT: on LOC: Config.Ftp.telnet DOC_START The FTP protocol is officially defined to use the telnet protocol as transport channel for the control connection. However, many implementations are broken and do not respect this aspect of the FTP protocol. If you have trouble accessing files with ASCII code 255 in the path or similar problems involving this ASCII code you can try setting this directive to off. 
If that helps, report to the operator of the FTP server in question that their FTP server is broken and does not follow the FTP standard. DOC_END COMMENT_START OPTIONS FOR EXTERNAL SUPPORT PROGRAMS ----------------------------------------------------------------------------- COMMENT_END NAME: diskd_program TYPE: string DEFAULT: @DEFAULT_DISKD@ LOC: Config.Program.diskd DOC_START Specify the location of the diskd executable. Note this is only useful if you have compiled in diskd as one of the store io modules. DOC_END NAME: unlinkd_program IFDEF: USE_UNLINKD TYPE: string DEFAULT: @DEFAULT_UNLINKD@ LOC: Config.Program.unlinkd DOC_START Specify the location of the executable for the file deletion process. DOC_END NAME: pinger_program TYPE: string DEFAULT: @DEFAULT_PINGER@ LOC: Config.Program.pinger IFDEF: USE_ICMP DOC_START Specify the location of the executable for the pinger process. DOC_END COMMENT_START OPTIONS FOR URL REWRITING ----------------------------------------------------------------------------- COMMENT_END NAME: url_rewrite_program redirect_program TYPE: programline LOC: Config.Program.url_rewrite.command DEFAULT: none DOC_START Specify the location of the executable for the URL rewriter. Since they can perform almost any function there isn't one included. For each requested URL the rewriter will receive one line with the format: URL client_ip "/" fqdn user method urlgroup And the rewriter may return a rewritten URL. The other components of the request line do not need to be returned (they are ignored if present). The rewriter can also indicate that a client-side redirect should be performed to the new URL. This is done by prefixing the returned URL with "301:" (moved permanently) or "302:" (moved temporarily). It can also return a "urlgroup" that can subsequently be matched in cache_peer_access and similar ACL driven rules. An urlgroup is returned by prefixing the returned URL with "!urlgroup!". By default, a URL rewriter is not used. 
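As an illustration of the line-oriented protocol described above, a minimal single-threaded helper (url_rewrite_concurrency 0) might look like the following Python sketch. The host names are placeholders and error handling is omitted; this is not a supplied Squid component.

```python
def rewrite(url):
    # Return a rewritten URL, or "" to leave the request untouched.
    # The "301:" prefix asks Squid to send a client-side permanent redirect.
    # example.old.host / example.new.host are placeholder names.
    if url.startswith("http://example.old.host/"):
        return "301:" + url.replace("example.old.host", "example.new.host", 1)
    return ""

def serve(inp, out):
    # One request per line: URL client_ip "/" fqdn user method urlgroup.
    # The helper must answer every line, even if only with a blank line.
    for line in inp:
        fields = line.split()
        out.write(rewrite(fields[0] if fields else "") + "\n")
        out.flush()
```

A real helper would end with serve(sys.stdin, sys.stdout) and be registered via url_rewrite_program.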
DOC_END NAME: url_rewrite_children redirect_children TYPE: int DEFAULT: 5 LOC: Config.Program.url_rewrite.children DOC_START The number of redirector processes to spawn. If you start too few Squid will have to wait for them to process a backlog of URLs, slowing it down. If you start too many they will use RAM and other system resources. DOC_END NAME: url_rewrite_concurrency redirect_concurrency TYPE: int DEFAULT: 0 LOC: Config.Program.url_rewrite.concurrency DOC_START The number of requests each redirector helper can handle in parallel. Defaults to 0, which indicates the redirector is an old-style single-threaded redirector. When this directive is set to a value >= 1 then the protocol used to communicate with the helper is modified to include a request ID in front of the request/response. The request ID from the request must be echoed back with the response to that request. DOC_END NAME: url_rewrite_host_header redirect_rewrites_host_header TYPE: onoff DEFAULT: on LOC: Config.onoff.redir_rewrites_host DOC_START By default Squid rewrites any Host: header in redirected requests. If you are running an accelerator this may not be a desired effect of a redirector. WARNING: Entries are cached on the result of the URL rewriting process, so be careful if you have domain-virtual hosts. DOC_END NAME: url_rewrite_access redirector_access TYPE: acl_access DEFAULT: none LOC: Config.accessList.url_rewrite DOC_START If defined, this access list specifies which requests are sent to the redirector processes. By default all requests are sent. DOC_END NAME: redirector_bypass TYPE: onoff LOC: Config.onoff.redirector_bypass DEFAULT: off DOC_START When this is 'on', a request will not go through the redirector if all redirectors are busy. If this is 'off' and the redirector queue grows too large, Squid will exit with a FATAL error and ask you to increase the number of redirectors. You should only enable this if the redirectors are not critical to your caching system. 
If you use redirectors for access control, and you enable this option, users may have access to pages they should not be allowed to request. DOC_END NAME: location_rewrite_program TYPE: programline LOC: Config.Program.location_rewrite.command DEFAULT: none DOC_START Specify the location of the executable for the Location rewriter, used to rewrite server generated redirects. Usually used in conjunction with a url_rewrite_program For each Location header received the location rewriter will receive one line with the format: location URL requested URL urlgroup And the rewriter may return a rewritten Location URL or a blank line. The other components of the request line do not need to be returned (they are ignored if present). By default, a Location rewriter is not used. DOC_END NAME: location_rewrite_children TYPE: int DEFAULT: 5 LOC: Config.Program.location_rewrite.children DOC_START The number of location rewriting processes to spawn. If you start too few Squid will have to wait for them to process a backlog of URLs, slowing it down. If you start too many they will use RAM and other system resources. DOC_END NAME: location_rewrite_concurrency TYPE: int DEFAULT: 0 LOC: Config.Program.location_rewrite.concurrency DOC_START The number of requests each Location rewriter helper can handle in parallel. Defaults to 0, which indicates that the helper is an old-style single-threaded helper. DOC_END NAME: location_rewrite_access TYPE: acl_access DEFAULT: none LOC: Config.accessList.location_rewrite DOC_START If defined, this access list specifies which requests are sent to the location rewriting processes. By default all Location headers are sent. 
DOC_END COMMENT_START OPTIONS FOR TUNING THE CACHE ----------------------------------------------------------------------------- COMMENT_END NAME: cache no_cache TYPE: acl_access DEFAULT: none LOC: Config.accessList.noCache DOC_START A list of ACL elements which, if matched, cause the request to not be satisfied from the cache and the reply to not be cached. In other words, use this to force certain objects to never be cached. You must use the word 'DENY' to indicate the ACL names which should NOT be cached. Default is to allow all to be cached NOCOMMENT_START #We recommend you use the following two lines. acl QUERY urlpath_regex cgi-bin \? cache deny QUERY NOCOMMENT_END DOC_END NAME: refresh_pattern TYPE: refreshpattern LOC: Config.Refresh DEFAULT: none DOC_START usage: refresh_pattern [-i] regex min percent max [options] By default, regular expressions are CASE-SENSITIVE. To make them case-insensitive, use the -i option. 'Min' is the time (in minutes) an object without an explicit expiry time should be considered fresh. The recommended value is 0; any higher value may cause dynamic applications to be erroneously cached unless the application designer has taken the appropriate actions. 'Percent' is a percentage of the object's age (time since last modification) for which an object without an explicit expiry time will be considered fresh. 'Max' is an upper limit on how long objects without an explicit expiry time will be considered fresh. options: override-expire override-lastmod reload-into-ims ignore-reload ignore-no-cache ignore-private ignore-auth override-expire enforces min age even if the server sent an Expires: header. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes. Note: this does not enforce staleness - it only extends freshness / min. If the server returns an Expires time which is longer than your max time, Squid will still consider the object fresh for that period of time. 
override-lastmod enforces min age even on objects that were modified recently. reload-into-ims changes client no-cache or ``reload'' to If-Modified-Since requests. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes. ignore-reload ignores a client no-cache or ``reload'' header. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes. ignore-no-cache ignores any ``Pragma: no-cache'' and ``Cache-control: no-cache'' headers received from a server. The HTTP RFC never allows the use of this (Pragma) header from a server, only a client, though plenty of servers send it anyway. ignore-private ignores any ``Cache-control: private'' headers received from a server. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes. ignore-auth caches responses to requests with authorization, as if the origin server had sent ``Cache-control: public'' in the response header. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes.

    Basically a cached object is:

        FRESH if expires < now, else STALE
        STALE if age > max
        FRESH if lm-factor < percent, else STALE
        FRESH if age < min
        else STALE

The refresh_pattern lines are checked in the order listed here. The first entry which matches is used. If none of the entries match the default will be used. Note, you must uncomment all the default lines if you want to change one. The default setting is only active if none is used. Suggested default: NOCOMMENT_START refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 NOCOMMENT_END DOC_END NAME: quick_abort_min COMMENT: (KB) TYPE: kb_size_t DEFAULT: 16 KB LOC: Config.quickAbort.min DOC_NONE NAME: quick_abort_max COMMENT: (KB) TYPE: kb_size_t DEFAULT: 16 KB LOC: Config.quickAbort.max DOC_NONE NAME: quick_abort_pct COMMENT: (percent) TYPE: int DEFAULT: 95 LOC: Config.quickAbort.pct DOC_START The cache by default continues downloading aborted requests which are almost completed (less than 16 KB remaining). This may be undesirable on slow (e.g. SLIP) links and/or very busy caches. Impatient users may tie up file descriptors and bandwidth by repeatedly requesting and immediately aborting downloads. When the user aborts a request, Squid will compare the quick_abort values to the amount of data transferred until then. If the transfer has less than 'quick_abort_min' KB remaining, it will finish the retrieval. If the transfer has more than 'quick_abort_max' KB remaining, it will abort the retrieval. If more than 'quick_abort_pct' of the transfer has completed, it will finish the retrieval. If you do not want any retrieval to continue after the client has aborted, set both 'quick_abort_min' and 'quick_abort_max' to '0 KB'. If you want retrievals to always continue if they are being cached set 'quick_abort_min' to '-1 KB'. DOC_END NAME: read_ahead_gap COMMENT: buffer-size TYPE: b_size_t LOC: Config.readAheadGap DEFAULT: 16 KB DOC_START The amount of data the cache will buffer ahead of what has been sent to the client when retrieving an object from another server. DOC_END NAME: negative_ttl COMMENT: time-units TYPE: time_t LOC: Config.negativeTtl DEFAULT: 5 minutes DOC_START Time-to-Live (TTL) for failed requests. Certain types of failures (such as "connection refused" and "404 Not Found") are negatively-cached for a configurable amount of time. The default is 5 minutes. Note that this is different from negative caching of DNS lookups. 
DOC_END NAME: positive_dns_ttl COMMENT: time-units TYPE: time_t LOC: Config.positiveDnsTtl DEFAULT: 6 hours DOC_START Upper limit on how long Squid will cache positive DNS responses. Default is 6 hours (360 minutes). This directive must be set larger than negative_dns_ttl. DOC_END NAME: negative_dns_ttl COMMENT: time-units TYPE: time_t LOC: Config.negativeDnsTtl DEFAULT: 1 minute DOC_START Time-to-Live (TTL) for negative caching of failed DNS lookups. This also sets the lower cache limit on positive lookups. Minimum value is 1 second, and it is not recommended to go much below 10 seconds. DOC_END NAME: range_offset_limit COMMENT: (bytes) TYPE: b_size_t LOC: Config.rangeOffsetLimit DEFAULT: 0 KB DOC_START Sets an upper limit on how far into the file a Range request may be to cause Squid to prefetch the whole file. If beyond this limit Squid forwards the Range request as it is and the result is NOT cached. This is to stop a far-ahead range request (let's say starting at 17MB) from making Squid fetch the whole object up to that point before sending anything to the client. A value of -1 causes Squid to always fetch the object from the beginning so it may cache the result. (2.0 style) A value of 0 causes Squid to never fetch more than the client requested. (default) DOC_END NAME: minimum_expiry_time COMMENT: (seconds) TYPE: time_t LOC: Config.minimum_expiry_time DEFAULT: 60 seconds DOC_START The minimum caching time according to (Expires - Date) headers that Squid honors if the object can't be revalidated. Defaults to 60 seconds. In reverse proxy environments it might be desirable to honor shorter object lifetimes. It is most likely better to make your server return a meaningful Last-Modified header, however. DOC_END NAME: store_avg_object_size COMMENT: (kbytes) TYPE: kb_size_t DEFAULT: 13 KB LOC: Config.Store.avgObjectSize DOC_START Average object size, used to estimate number of objects your cache can hold. The default is 13 KB. 
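As a worked example of the estimate (the 100 GB figure below is an illustrative assumption, not a default), the object count is simply the cache size divided by this average:

```python
KB = 1024

def estimated_objects(cache_size_kb, avg_object_size_kb=13):
    # Capacity estimate: total cache size / store_avg_object_size.
    return cache_size_kb // avg_object_size_kb

# A hypothetical 100 GB cache_dir holds roughly 8 million 13 KB objects.
objects = estimated_objects(100 * KB * KB)
```

If your cache stores mostly large objects, raising store_avg_object_size accordingly keeps this estimate (and the structures sized from it) realistic.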
DOC_END NAME: store_objects_per_bucket TYPE: int DEFAULT: 20 LOC: Config.Store.objectsPerBucket DOC_START Target number of objects per bucket in the store hash table. Lowering this value increases the total number of buckets and also the storage maintenance rate. The default is 20. DOC_END COMMENT_START HTTP OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: request_header_max_size COMMENT: (KB) TYPE: b_size_t DEFAULT: 20 KB LOC: Config.maxRequestHeaderSize DOC_START This specifies the maximum size for HTTP headers in a request. Request headers are usually relatively small (about 512 bytes). Placing a limit on the request header size will catch certain bugs (for example with persistent connections) and possibly buffer-overflow or denial-of-service attacks. DOC_END NAME: reply_header_max_size COMMENT: (KB) TYPE: b_size_t DEFAULT: 20 KB LOC: Config.maxReplyHeaderSize DOC_START This specifies the maximum size for HTTP headers in a reply. Reply headers are usually relatively small (about 512 bytes). Placing a limit on the reply header size will catch certain bugs (for example with persistent connections) and possibly buffer-overflow or denial-of-service attacks. DOC_END NAME: request_body_max_size COMMENT: (KB) TYPE: b_size_t DEFAULT: 0 KB LOC: Config.maxRequestBodySize DOC_START This specifies the maximum size for an HTTP request body. In other words, the maximum size of a PUT/POST request. A user who attempts to send a request with a body larger than this limit receives an "Invalid Request" error message. If you set this parameter to zero (the default), there will be no limit imposed. DOC_END NAME: broken_posts TYPE: acl_access DEFAULT: none LOC: Config.accessList.brokenPosts DOC_START A list of ACL elements which, if matched, causes Squid to send an extra CRLF pair after the body of a PUT/POST request. 
Some HTTP servers have broken implementations of PUT/POST, and rely on an extra CRLF pair sent by some WWW clients. Quote from RFC2616 section 4.1 on this matter: Note: certain buggy HTTP/1.0 client implementations generate extra CRLF's after a POST request. To restate what is explicitly forbidden by the BNF, an HTTP/1.1 client must not preface or follow a request with an extra CRLF. Example: acl buggy_server url_regex ^http://.... broken_posts allow buggy_server DOC_END NAME: via IFDEF: HTTP_VIOLATIONS COMMENT: on|off TYPE: onoff DEFAULT: on LOC: Config.onoff.via DOC_START If set (default), Squid will include a Via header in requests and replies as required by RFC2616. DOC_END NAME: cache_vary TYPE: onoff DEFAULT: on LOC: Config.onoff.cache_vary DOC_START When 'cache_vary' is set to off, responses that have a Vary header will not be stored in the cache. DOC_END NAME: broken_vary_encoding TYPE: acl_access DEFAULT: none LOC: Config.accessList.vary_encoding DOC_START Many servers have broken support for on-the-fly Content-Encoding, returning the same ETag on both plain and gzip:ed variants. Vary replies matching this access list will have the cache split on the Accept-Encoding header of the request, not trusting the ETag to be unique. NOCOMMENT_START # Apache mod_gzip and mod_deflate known to be broken so don't trust # Apache to signal ETag correctly on such responses acl apache rep_header Server ^Apache broken_vary_encoding allow apache NOCOMMENT_END DOC_END NAME: collapsed_forwarding COMMENT: (on|off) TYPE: onoff LOC: Config.onoff.collapsed_forwarding DEFAULT: off DOC_START This option enables multiple requests for the same URI to be processed as one request. Normally disabled to avoid increased latency on dynamic content, but there can be benefit from enabling this in accelerator setups where the web servers are the bottleneck, are reliable, and return mostly cacheable information. 
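The collapsing idea can be sketched in Python (purely as an illustration of the technique, not of Squid's internals; error handling is omitted): concurrent requests for one key share a single upstream fetch instead of each hitting the origin.

```python
import threading

class Collapser:
    """Merge concurrent fetches of the same key into one upstream request."""

    def __init__(self, fetch):
        self.fetch = fetch            # the real (slow) origin fetch
        self.lock = threading.Lock()
        self.inflight = {}            # key -> (done event, result box)

    def get(self, key):
        with self.lock:
            entry = self.inflight.get(key)
            leader = entry is None
            if leader:                # first requester does the fetch
                entry = (threading.Event(), [])
                self.inflight[key] = entry
        done, box = entry
        if leader:
            box.append(self.fetch(key))
            with self.lock:
                del self.inflight[key]
            done.set()
        else:
            done.wait()               # followers reuse the leader's reply
        return box[0]
```

The latency trade-off mentioned above is visible here: a follower that arrives while a slow fetch is in flight waits for that whole fetch, which only pays off when the reply is cacheable and the origin is the bottleneck.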
DOC_END NAME: refresh_stale_hit COMMENT: (time) TYPE: time_t DEFAULT: 0 seconds LOC: Config.refresh_stale_window DOC_START This option changes the refresh algorithm to allow concurrent requests while an object is being refreshed to be processed as cache hits if the object expired less than X seconds ago. Default is 0 to disable this feature. This option is mostly interesting in accelerator setups where a few objects are accessed very frequently. DOC_END NAME: ie_refresh COMMENT: on|off TYPE: onoff LOC: Config.onoff.ie_refresh DEFAULT: off DOC_START Microsoft Internet Explorer up until version 5.5 Service Pack 1 has an issue with transparent proxies, wherein it is impossible to force a refresh. Turning this on provides a partial fix to the problem, by causing all IMS-REFRESH requests from older IE versions to check the origin server for fresh content. This reduces hit ratio by some amount (~10% in my experience), but allows users to actually get fresh content when they want it. Note because Squid cannot tell if the user is using 5.5 or 5.5SP1, the behavior of 5.5 is unchanged from old versions of Squid (i.e. a forced refresh is impossible). Newer versions of IE will, hopefully, continue to have the new behavior and will be handled based on that assumption. This option defaults to the old Squid behavior, which is better for hit ratios but worse for clients using IE, if they need to be able to force fresh content. DOC_END NAME: vary_ignore_expire COMMENT: on|off TYPE: onoff LOC: Config.onoff.vary_ignore_expire DEFAULT: off DOC_START Many HTTP servers supporting Vary give such objects an immediate expiry time with no cache-control header when requested by an HTTP/1.0 client. This option enables Squid to ignore such expiry times until HTTP/1.1 is fully implemented. WARNING: This may eventually cause some varying objects not intended for caching to get cached. 
DOC_END NAME: extension_methods TYPE: extension_method LOC: RequestMethodStr DEFAULT: none DOC_START Squid only knows about standardized HTTP request methods. You can add up to 20 additional "extension" methods here. DOC_END NAME: request_entities TYPE: onoff LOC: Config.onoff.request_entities DEFAULT: off DOC_START Squid defaults to deny GET and HEAD requests with request entities, as the meaning of such requests is undefined in the HTTP standard even if not explicitly forbidden. Set this directive to on if you have clients which insist on sending request entities in GET or HEAD requests. But be warned that there is server software (both proxies and web servers) which can fail to properly process this kind of request, which may make you vulnerable to cache pollution attacks if enabled. DOC_END NAME: header_access IFDEF: HTTP_VIOLATIONS TYPE: http_header_access[] LOC: Config.header_access DEFAULT: none DOC_START Usage: header_access header_name allow|deny [!]aclname ... WARNING: Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes. This option replaces the old 'anonymize_headers' and the older 'http_anonymizer' option with something that is much more configurable. This new method creates a list of ACLs for each header, allowing you very fine-tuned header mangling. You can only specify known headers for the header name. Other headers are reclassified as 'Other'. You can also refer to all the headers with 'All'. 
For example, to achieve the same behavior as the old 'http_anonymizer standard' option, you should use: header_access From deny all header_access Referer deny all header_access Server deny all header_access User-Agent deny all header_access WWW-Authenticate deny all header_access Link deny all Or, to reproduce the old 'http_anonymizer paranoid' feature you should use: header_access Allow allow all header_access Authorization allow all header_access WWW-Authenticate allow all header_access Proxy-Authorization allow all header_access Proxy-Authenticate allow all header_access Cache-Control allow all header_access Content-Encoding allow all header_access Content-Length allow all header_access Content-Type allow all header_access Date allow all header_access Expires allow all header_access Host allow all header_access If-Modified-Since allow all header_access Last-Modified allow all header_access Location allow all header_access Pragma allow all header_access Accept allow all header_access Accept-Charset allow all header_access Accept-Encoding allow all header_access Accept-Language allow all header_access Content-Language allow all header_access Mime-Version allow all header_access Retry-After allow all header_access Title allow all header_access Connection allow all header_access Proxy-Connection allow all header_access All deny all By default, all headers are allowed (no anonymizing is performed). DOC_END NAME: header_replace IFDEF: HTTP_VIOLATIONS TYPE: http_header_replace[] LOC: Config.header_access DEFAULT: none DOC_START Usage: header_replace header_name message Example: header_replace User-Agent Nutscrape/1.0 (CP/M; 8-bit) This option allows you to change the contents of headers denied with header_access above, by replacing them with some fixed string. This replaces the old fake_user_agent option. By default, headers are removed if denied. 
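As a minimal sketch combining the two directives (the substitute string is illustrative), one can deny the real User-Agent header and have header_replace send a fixed value in its place:

```
# Deny the real User-Agent header, then substitute a fixed string
# for the denied header in forwarded requests.
header_access User-Agent deny all
header_replace User-Agent Mozilla/4.0 (compatible; anonymized)
```
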
DOC_END NAME: relaxed_header_parser COMMENT: on|off|warn TYPE: tristate LOC: Config.onoff.relaxed_header_parser DEFAULT: on DOC_START In the default "on" setting Squid accepts certain forms of non-compliant HTTP messages where it is unambiguous what the sending application intended even if the message is not correctly formatted. The message is then normalized to the correct form when forwarded by Squid. If set to "warn" then a warning will be emitted in cache.log each time such an HTTP error is encountered. If set to "off" then such HTTP errors will cause the request or response to be rejected. DOC_END COMMENT_START TIMEOUTS ----------------------------------------------------------------------------- COMMENT_END NAME: forward_timeout COMMENT: time-units TYPE: time_t LOC: Config.Timeout.forward DEFAULT: 4 minutes DOC_START This parameter specifies the maximum amount of time Squid should spend trying to find a forwarding path for the request before giving up. DOC_END NAME: connect_timeout COMMENT: time-units TYPE: time_t LOC: Config.Timeout.connect DEFAULT: 1 minute DOC_START This parameter specifies how long to wait for the TCP connect to the requested server or peer to complete before Squid should attempt to find another path on which to forward the request. DOC_END NAME: peer_connect_timeout COMMENT: time-units TYPE: time_t LOC: Config.Timeout.peer_connect DEFAULT: 30 seconds DOC_START This parameter specifies how long to wait for a pending TCP connection to a peer cache. The default is 30 seconds. You may also set different timeout values for individual neighbors with the 'connect-timeout' option on a 'cache_peer' line. DOC_END NAME: read_timeout COMMENT: time-units TYPE: time_t LOC: Config.Timeout.read DEFAULT: 15 minutes DOC_START The read_timeout is applied on server-side connections. After each successful read(), the timeout will be extended by this amount. If no data is read again after this amount of time, the request is aborted and logged with ERR_READ_TIMEOUT. 
The default is 15 minutes. DOC_END NAME: request_timeout TYPE: time_t LOC: Config.Timeout.request DEFAULT: 5 minutes DOC_START How long to wait for an HTTP request after initial connection establishment. DOC_END NAME: persistent_request_timeout TYPE: time_t LOC: Config.Timeout.persistent_request DEFAULT: 2 minutes DOC_START How long to wait for the next HTTP request on a persistent connection after the previous request completes. DOC_END NAME: client_lifetime COMMENT: time-units TYPE: time_t LOC: Config.Timeout.lifetime DEFAULT: 1 day DOC_START The maximum amount of time a client (browser) is allowed to remain connected to the cache process. This protects the Cache from having a lot of sockets (and hence file descriptors) tied up in a CLOSE_WAIT state from remote clients that go away without properly shutting down (either because of a network failure or because of a poor client implementation). The default is one day, 1440 minutes. NOTE: The default value is intended to be much larger than any client would ever need to be connected to your cache. You should probably change client_lifetime only as a last resort. If you seem to have many client connections tying up file descriptors, we recommend first tuning the read_timeout, request_timeout, persistent_request_timeout and quick_abort values. DOC_END NAME: half_closed_clients TYPE: onoff LOC: Config.onoff.half_closed_clients DEFAULT: on DOC_START Some clients may shut down the sending side of their TCP connections, while leaving their receiving sides open. Sometimes, Squid cannot tell the difference between a half-closed and a fully-closed TCP connection. By default, half-closed client connections are kept open until a read(2) or write(2) on the socket returns an error. Change this option to 'off' and Squid will immediately close client connections when read(2) returns "no more data to read." 
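As a sketch, an administrator who sees many half-closed client connections lingering could turn the option off:

```
# Close client connections as soon as read(2) reports end-of-file,
# instead of waiting for a read/write error on the socket.
half_closed_clients off
```
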
DOC_END NAME: pconn_timeout TYPE: time_t LOC: Config.Timeout.pconn DEFAULT: 1 minute DOC_START Timeout for idle persistent connections to servers and other proxies. DOC_END NAME: ident_timeout TYPE: time_t IFDEF: USE_IDENT LOC: Config.Timeout.ident DEFAULT: 10 seconds DOC_START Maximum time to wait for IDENT lookups to complete. If this is too high, and you enabled IDENT lookups from untrusted users, you might be susceptible to denial-of-service by having many ident requests going at once. DOC_END NAME: shutdown_lifetime COMMENT: time-units TYPE: time_t LOC: Config.shutdownLifetime DEFAULT: 30 seconds DOC_START When SIGTERM or SIGHUP is received, the cache is put into "shutdown pending" mode until all active sockets are closed. This value is the lifetime to set for all open descriptors during shutdown mode. Any active clients after this many seconds will receive a 'timeout' message. DOC_END COMMENT_START ADMINISTRATIVE PARAMETERS ----------------------------------------------------------------------------- COMMENT_END NAME: cache_mgr TYPE: string DEFAULT: webmaster LOC: Config.adminEmail DOC_START Email-address of local cache manager who will receive mail if the cache dies. The default is "webmaster". DOC_END NAME: mail_from TYPE: string DEFAULT: none LOC: Config.EmailFrom DOC_START From: email-address for mail sent when the cache dies. The default is to use 'appname@unique_hostname'. The default appname value is "squid"; it can be changed in src/globals.h before building Squid. DOC_END NAME: mail_program TYPE: eol DEFAULT: mail LOC: Config.EmailProgram DOC_START Email program used to send mail if the cache dies. The default is "mail". The specified program must comply with the standard Unix mail syntax: mail-program recipient < mailfile Optional command line options can be specified. 
DOC_END NAME: cache_effective_user TYPE: string DEFAULT: nobody LOC: Config.effectiveUser DOC_START If you start Squid as root, it will change its effective/real UID/GID to the user specified below. The default is to change the UID to nobody. If you define cache_effective_user, but not cache_effective_group, Squid sets the GID to the effective user's default group ID (taken from the password file) and the supplementary group list from the group memberships of cache_effective_user. DOC_END NAME: cache_effective_group TYPE: string DEFAULT: none LOC: Config.effectiveGroup DOC_START If you want Squid to run with a specific GID regardless of the group memberships of the effective user then set this to the group (or GID) you want Squid to run as. When set, all other group privileges of the effective user are ignored and only this GID is effective. If Squid is not started as root the user starting Squid must be a member of the specified group. DOC_END NAME: httpd_suppress_version_string COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.httpd_suppress_version_string DOC_START Suppress Squid version string info in HTTP headers and HTML error pages. DOC_END NAME: visible_hostname TYPE: string LOC: Config.visibleHostname DEFAULT: none DOC_START If you want to present a special hostname in error messages, etc., define this. Otherwise, the return value of gethostname() will be used. If you have multiple caches in a cluster and get errors about IP-forwarding you must set them to have individual names with this setting. DOC_END NAME: unique_hostname TYPE: string LOC: Config.uniqueHostname DEFAULT: none DOC_START If you want to have multiple machines with the same 'visible_hostname' you must give each machine a different 'unique_hostname' so forwarding loops can be detected. DOC_END NAME: hostname_aliases TYPE: wordlist LOC: Config.hostnameAliases DEFAULT: none DOC_START A list of other DNS names your cache has. 
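As a sketch of how the three hostname directives fit together in a small cluster (all hostnames below are hypothetical):

```
# Two caches share one public name; each gets its own unique name
# so forwarding loops can be detected, and any extra DNS names the
# cache answers to are listed as aliases.
visible_hostname proxy.example.com
unique_hostname cache1.example.com      # different on each cluster node
hostname_aliases www-cache.example.com webcache.example.com
```
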
DOC_END NAME: umask TYPE: int LOC: Config.umask DEFAULT: 027 DOC_START Minimum umask which should be enforced while the proxy is running, in addition to the umask set at startup. Note: Should start with a 0 to indicate the normal octal representation of umasks DOC_END COMMENT_START OPTIONS FOR THE CACHE REGISTRATION SERVICE ----------------------------------------------------------------------------- This section contains parameters for the (optional) cache announcement service. This service is provided to help cache administrators locate one another in order to join or create cache hierarchies. An 'announcement' message is sent (via UDP) to the registration service by Squid. By default, the announcement message is NOT SENT unless you enable it with 'announce_period' below. The announcement message includes your hostname, plus the following information from this configuration file: http_port icp_port cache_mgr All current information is processed regularly and made available on the Web at http://www.ircache.net/Cache/Tracker/. COMMENT_END NAME: announce_period TYPE: time_t LOC: Config.Announce.period DEFAULT: 0 DOC_START This is how frequently to send cache announcements. The default is `0' which disables sending the announcement messages. To enable announcing your cache, just uncomment the line below. NOCOMMENT_START #To enable announcing your cache, just uncomment the line below. #announce_period 1 day NOCOMMENT_END DOC_END NAME: announce_host TYPE: string DEFAULT: tracker.ircache.net LOC: Config.Announce.host DOC_NONE NAME: announce_file TYPE: string DEFAULT: none LOC: Config.Announce.file DOC_NONE NAME: announce_port TYPE: ushort DEFAULT: 3131 LOC: Config.Announce.port DOC_START announce_host and announce_port set the hostname and port number where the registration message will be sent. Hostname will default to 'tracker.ircache.net' and port will default to 3131. 
If the 'filename' argument is given, the contents of that file will be included in the announce message. DOC_END COMMENT_START HTTPD-ACCELERATOR OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: httpd_accel_no_pmtu_disc COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.accel_no_pmtu_disc DOC_START In many setups of transparently intercepting proxies, Path-MTU discovery cannot work on traffic towards the clients. This is the case when the intercepting device does not fully track connections and fails to forward ICMP must fragment messages to the cache server. If you have such a setup and find that certain clients sporadically hang or never complete requests, set this to on. DOC_END COMMENT_START DELAY POOL PARAMETERS ----------------------------------------------------------------------------- COMMENT_END NAME: delay_pools TYPE: delay_pool_count DEFAULT: 0 IFDEF: DELAY_POOLS LOC: Config.Delay DOC_START This represents the number of delay pools to be used. For example, if you have one class 2 delay pool and one class 3 delay pool, you have a total of 2 delay pools. DOC_END NAME: delay_class TYPE: delay_pool_class DEFAULT: none IFDEF: DELAY_POOLS LOC: Config.Delay DOC_START This defines the class of each delay pool. There must be exactly one delay_class line for each delay pool. For example, to define two delay pools, one of class 2 and one of class 3, the settings above and here would be: Example: delay_pools 2 # 2 delay pools delay_class 1 2 # pool 1 is a class 2 pool delay_class 2 3 # pool 2 is a class 3 pool The delay pool classes are: class 1 Everything is limited by a single aggregate bucket. class 2 Everything is limited by a single aggregate bucket as well as an "individual" bucket chosen from bits 25 through 32 of the IP address. 
class 3 Everything is limited by a single aggregate bucket as well as a "network" bucket chosen from bits 17 through 24 of the IP address and an "individual" bucket chosen from bits 17 through 32 of the IP address. NOTE: If an IP address is a.b.c.d -> bits 25 through 32 are "d" -> bits 17 through 24 are "c" -> bits 17 through 32 are "c * 256 + d" DOC_END NAME: delay_access TYPE: delay_pool_access DEFAULT: none IFDEF: DELAY_POOLS LOC: Config.Delay DOC_START This is used to determine which delay pool a request falls into. delay_access is sorted per pool and the matching starts with pool 1, then pool 2, ..., and finally pool N. The first delay pool where the request is allowed is selected for the request. If no pool allows the request, the request is not delayed (default). For example, if you want some_big_clients in delay pool 1 and lotsa_little_clients in delay pool 2: Example: delay_access 1 allow some_big_clients delay_access 1 deny all delay_access 2 allow lotsa_little_clients delay_access 2 deny all DOC_END NAME: delay_parameters TYPE: delay_pool_rates DEFAULT: none IFDEF: DELAY_POOLS LOC: Config.Delay DOC_START This defines the parameters for a delay pool. Each delay pool has a number of "buckets" associated with it, as explained in the description of delay_class. For a class 1 delay pool, the syntax is: delay_parameters pool aggregate For a class 2 delay pool: delay_parameters pool aggregate individual For a class 3 delay pool: delay_parameters pool aggregate network individual The variables here are: pool a pool number - i.e., a number between 1 and the number specified in delay_pools as used in delay_class lines. aggregate the "delay parameters" for the aggregate bucket (class 1, 2, 3). individual the "delay parameters" for the individual buckets (class 2, 3). network the "delay parameters" for the network buckets (class 3). 
A pair of delay parameters is written restore/maximum, where restore is the number of bytes (not bits - modem and network speeds are usually quoted in bits) per second placed into the bucket, and maximum is the maximum number of bytes which can be in the bucket at any time. For example, if delay pool number 1 is a class 2 delay pool as in the above example, and is being used to strictly limit each host to 64kbps (plus overheads), with no overall limit, the line is: delay_parameters 1 -1/-1 8000/8000 Note that the figure -1 is used to represent "unlimited". And, if delay pool number 2 is a class 3 delay pool as in the above example, and you want to limit it to a total of 256kbps (strict limit) with each 8-bit network permitted 64kbps (strict limit) and each individual host permitted 4800bps with a bucket maximum size of 64kb to permit a decent web page to be downloaded at a decent speed (if the network is not being limited due to overuse) but slow down large downloads more significantly: delay_parameters 2 32000/32000 8000/8000 600/8000 There must be one delay_parameters line for each delay pool. DOC_END NAME: delay_initial_bucket_level COMMENT: (percent, 0-100) TYPE: ushort DEFAULT: 50 IFDEF: DELAY_POOLS LOC: Config.Delay.initial DOC_START The initial bucket percentage is used to determine how much is put in each bucket when squid starts, is reconfigured, or first notices a host accessing it (in class 2 and class 3, individual hosts and networks only have buckets associated with them once they have been "seen" by squid). DOC_END COMMENT_START WCCPv1 AND WCCPv2 CONFIGURATION OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: wccp_router TYPE: address LOC: Config.Wccp.router DEFAULT: 0.0.0.0 IFDEF: USE_WCCP DOC_NONE NAME: wccp2_router TYPE: sockaddr_in_list LOC: Config.Wccp2.router DEFAULT: none IFDEF: USE_WCCPv2 DOC_START Use this option to define your WCCP ``home'' router for Squid. 
wccp_router supports a single WCCP(v1) router wccp2_router supports multiple WCCPv2 routers only one of the two may be used at the same time and defines which version of WCCP to use. DOC_END NAME: wccp_version TYPE: int LOC: Config.Wccp.version DEFAULT: 4 IFDEF: USE_WCCP DOC_START This directive is only relevant if you need to set up WCCP(v1) to some very old and end-of-life Cisco routers. In all other setups it must be left unset or at the default setting. It defines an internal version in the WCCP(v1) protocol, with version 4 being the officially documented protocol. According to some users, Cisco IOS 11.2 and earlier only support WCCP version 3. If you're using that or an earlier version of IOS, you may need to change this value to 3, otherwise do not specify this parameter. DOC_END NAME: wccp2_rebuild_wait TYPE: onoff LOC: Config.Wccp2.rebuildwait DEFAULT: on IFDEF: USE_WCCPv2 DOC_START If this is enabled Squid will wait for the cache dir rebuild to finish before sending the first wccp2 HereIAm packet DOC_END NAME: wccp2_forwarding_method TYPE: int LOC: Config.Wccp2.forwarding_method DEFAULT: 1 IFDEF: USE_WCCPv2 DOC_START WCCP2 allows the setting of forwarding methods between the router/switch and the cache. Valid values are as follows: 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel) 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting) Currently (as of IOS 12.4) cisco routers only support GRE. Cisco switches only support the L2 redirect assignment method. DOC_END NAME: wccp2_return_method TYPE: int LOC: Config.Wccp2.return_method DEFAULT: 1 IFDEF: USE_WCCPv2 DOC_START WCCP2 allows the setting of return methods between the router/switch and the cache for packets that the cache decides not to handle. Valid values are as follows: 1 - GRE encapsulation (forward the packet in a GRE/WCCP tunnel) 2 - L2 redirect (forward the packet using Layer 2/MAC rewriting) Currently (as of IOS 12.4) cisco routers only support GRE. 
Cisco switches only support the L2 redirect assignment. If the "ip wccp redirect exclude in" command has been enabled on the cache interface, then it is still safe for the proxy server to use an L2 redirect method even if this option is set to GRE. DOC_END NAME: wccp2_assignment_method TYPE: int LOC: Config.Wccp2.assignment_method DEFAULT: 1 IFDEF: USE_WCCPv2 DOC_START WCCP2 allows the setting of methods to assign the WCCP hash. Valid values are as follows: 1 - Hash assignment 2 - Mask assignment As a general rule, cisco routers support the hash assignment method and cisco switches support the mask assignment method. DOC_END NAME: wccp2_service TYPE: wccp2_service LOC: Config.Wccp2.info DEFAULT: none DEFAULT_IF_NONE: standard 0 IFDEF: USE_WCCPv2 DOC_START WCCP2 allows for multiple traffic services. There are two types: "standard" and "dynamic". The standard type defines one service id - http (id 0). The dynamic service ids can be from 51 to 255 inclusive. In order to use a dynamic service id one must define the type of traffic to be redirected; this is done using the wccp2_service_info option. The "standard" type does not require a wccp2_service_info option, just specifying the service id will suffice. MD5 service authentication can be enabled by adding "password=" to the end of this service declaration. Examples: wccp2_service standard 0 # for the 'web-cache' standard service wccp2_service dynamic 80 # a dynamic service type which will be # fleshed out with subsequent options. wccp2_service standard 0 password=foo DOC_END NAME: wccp2_service_info TYPE: wccp2_service_info LOC: Config.Wccp2.info DEFAULT: none IFDEF: USE_WCCPv2 DOC_START Dynamic WCCPv2 services require further information to define the traffic you wish to have diverted. The format is: wccp2_service_info <id> protocol=<protocol> flags=<flag>,<flag>.. priority=<priority> ports=<port>,<port>.. 
The relevant WCCPv2 flags: + src_ip_hash, dst_ip_hash + source_port_hash, dst_port_hash + src_ip_alt_hash, dst_ip_alt_hash + src_port_alt_hash, dst_port_alt_hash + ports_source The port list can be one to eight entries. Example: wccp2_service_info 80 protocol=tcp flags=src_ip_hash,ports_source priority=240 ports=80 Note: the service id must have been defined by a previous 'wccp2_service dynamic <id>' entry. DOC_END NAME: wccp2_weight TYPE: int LOC: Config.Wccp2.weight DEFAULT: 10000 IFDEF: USE_WCCPv2 DOC_START Each cache server gets assigned a share of the destination hash proportional to its weight. DOC_END NAME: wccp_address TYPE: address LOC: Config.Wccp.address DEFAULT: 0.0.0.0 IFDEF: USE_WCCP DOC_NONE NAME: wccp2_address TYPE: address LOC: Config.Wccp2.address DEFAULT: 0.0.0.0 IFDEF: USE_WCCPv2 DOC_START Use this option if you require WCCP to use a specific interface address. The default behavior is to not bind to any specific address. DOC_END COMMENT_START PERSISTENT CONNECTION HANDLING ----------------------------------------------------------------------------- Also see "pconn_timeout" in the TIMEOUTS section COMMENT_END NAME: client_persistent_connections TYPE: onoff LOC: Config.onoff.client_pconns DEFAULT: on DOC_NONE NAME: server_persistent_connections TYPE: onoff LOC: Config.onoff.server_pconns DEFAULT: on DOC_START Persistent connection support for clients and servers. By default, Squid uses persistent connections (when allowed) with its clients and servers. You can use these options to disable persistent connections with clients and/or servers. DOC_END NAME: persistent_connection_after_error TYPE: onoff LOC: Config.onoff.error_pconns DEFAULT: off DOC_START With this directive the use of persistent connections after HTTP errors can be disabled. Useful if you have clients who fail to handle errors on persistent connections properly. 
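As a sketch: if all clients in your environment are known to handle error replies on persistent connections correctly, the connections can be kept open after an error:

```
# Allow persistent connections to stay open after an HTTP error
# reply (the default, off, closes them after an error).
persistent_connection_after_error on
```
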
DOC_END NAME: detect_broken_pconn TYPE: onoff LOC: Config.onoff.detect_broken_server_pconns DEFAULT: off DOC_START Some servers have been found to incorrectly signal the use of HTTP/1.0 persistent connections even on replies that are not compatible, causing significant delays. This server problem has mostly been seen on redirects. By enabling this directive Squid attempts to detect such broken replies and automatically assume the reply is finished after a 10 second timeout. DOC_END COMMENT_START CACHE DIGEST OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: digest_generation IFDEF: USE_CACHE_DIGESTS TYPE: onoff LOC: Config.onoff.digest_generation DEFAULT: on DOC_START This controls whether the server will generate a Cache Digest of its contents. DOC_END NAME: digest_bits_per_entry IFDEF: USE_CACHE_DIGESTS TYPE: int LOC: Config.digest.bits_per_entry DEFAULT: 5 DOC_START This is the number of bits of the server's Cache Digest which will be associated with the Digest entry for a given HTTP Method and URL (public key) combination. The default is 5. DOC_END NAME: digest_rebuild_period IFDEF: USE_CACHE_DIGESTS COMMENT: (seconds) TYPE: time_t LOC: Config.digest.rebuild_period DEFAULT: 1 hour DOC_START This is the wait time between Cache Digest rebuilds. DOC_END NAME: digest_rewrite_period COMMENT: (seconds) IFDEF: USE_CACHE_DIGESTS TYPE: time_t LOC: Config.digest.rewrite_period DEFAULT: 1 hour DOC_START This is the wait time between Cache Digest writes to disk. DOC_END NAME: digest_swapout_chunk_size COMMENT: (bytes) TYPE: b_size_t IFDEF: USE_CACHE_DIGESTS LOC: Config.digest.swapout_chunk_size DEFAULT: 4096 bytes DOC_START This is the number of bytes of the Cache Digest to write to disk at a time. It defaults to 4096 bytes (4KB), the Squid default swap page. 
DOC_END NAME: digest_rebuild_chunk_percentage COMMENT: (percent, 0-100) IFDEF: USE_CACHE_DIGESTS TYPE: int LOC: Config.digest.rebuild_chunk_percentage DEFAULT: 10 DOC_START This is the percentage of the Cache Digest to be scanned at a time. By default it is set to 10% of the Cache Digest. DOC_END COMMENT_START SNMP OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: snmp_port TYPE: ushort LOC: Config.Port.snmp DEFAULT: 3401 IFDEF: SQUID_SNMP DOC_START Squid can now serve statistics and status information via SNMP. By default it listens to port 3401 on the machine. If you don't wish to use SNMP, set this to "0". DOC_END NAME: snmp_access TYPE: acl_access LOC: Config.accessList.snmp DEFAULT: none DEFAULT_IF_NONE: deny all IFDEF: SQUID_SNMP DOC_START Allowing or denying access to the SNMP port. All access to the agent is denied by default. usage: snmp_access allow|deny [!]aclname ... Example: snmp_access allow snmppublic localhost snmp_access deny all DOC_END NAME: snmp_incoming_address TYPE: address LOC: Config.Addrs.snmp_incoming DEFAULT: 0.0.0.0 IFDEF: SQUID_SNMP DOC_NONE NAME: snmp_outgoing_address TYPE: address LOC: Config.Addrs.snmp_outgoing DEFAULT: 255.255.255.255 IFDEF: SQUID_SNMP DOC_START Just like 'udp_incoming_address' above, but for the SNMP port. snmp_incoming_address is used for the SNMP socket receiving messages from SNMP agents. snmp_outgoing_address is used for SNMP packets returned to SNMP agents. The default snmp_incoming_address (0.0.0.0) is to listen on all available network interfaces. If snmp_outgoing_address is set to 255.255.255.255 (the default) it will use the same socket as snmp_incoming_address. Only change this if you want to have SNMP replies sent using another address than where this Squid listens for SNMP queries. NOTE, snmp_incoming_address and snmp_outgoing_address can not have the same value since they both use port 3401. 
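As a sketch of splitting the two sockets (addresses are hypothetical; as noted above, the two values must differ since both use port 3401):

```
# Receive SNMP queries on one interface and send replies from another.
snmp_incoming_address 192.0.2.25
snmp_outgoing_address 192.0.2.26
```
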
DOC_END COMMENT_START ICP OPTIONS ----------------------------------------------------------------------------- COMMENT_END NAME: icp_port udp_port TYPE: ushort DEFAULT: @DEFAULT_ICP_PORT@ LOC: Config.Port.icp DOC_START The port number where Squid sends and receives ICP queries to and from neighbor caches. Default is 3130. To disable use "0". May be overridden with -u on the command line. DOC_END NAME: htcp_port IFDEF: USE_HTCP TYPE: ushort DEFAULT: 4827 LOC: Config.Port.htcp DOC_START The port number where Squid sends and receives HTCP queries to and from neighbor caches. Default is 4827. To disable use "0". DOC_END NAME: log_icp_queries COMMENT: on|off TYPE: onoff DEFAULT: on LOC: Config.onoff.log_udp DOC_START If set, ICP queries are logged to access.log. You may wish to disable this if your ICP load is VERY high to speed things up or to simplify log analysis. DOC_END NAME: udp_incoming_address TYPE: address LOC: Config.Addrs.udp_incoming DEFAULT: 0.0.0.0 DOC_START udp_incoming_address is used for UDP packets received from other caches. The default behavior is to not bind to any specific address. Only change this if you want to have all UDP queries received on a specific interface/address. NOTE: udp_incoming_address is used by the ICP, HTCP, and DNS modules. Altering it will affect all of them in the same manner. See also: udp_outgoing_address NOTE: udp_incoming_address and udp_outgoing_address cannot have the same value since they both use the same port. DOC_END NAME: udp_outgoing_address TYPE: address LOC: Config.Addrs.udp_outgoing DEFAULT: 255.255.255.255 DOC_START udp_outgoing_address is used for UDP packets sent out to other caches. The default behavior is to not bind to any specific address. Instead it will use the same socket as udp_incoming_address. Only change this if you want to have UDP queries sent using another address than where this Squid listens for UDP queries from other caches. 
NOTE: udp_outgoing_address is used by the ICP, HTCP, and DNS modules. Altering it will affect all of them in the same manner. see also; udp_incoming_address NOTE, udp_incoming_address and udp_outgoing_address can not have the same value since they both use the same port. DOC_END NAME: icp_hit_stale COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.icp_hit_stale DOC_START If you want to return ICP_HIT for stale cache objects, set this option to 'on'. If you have sibling relationships with caches in other administrative domains, this should be 'off'. If you only have sibling relationships with caches under your control, it is probably okay to set this to 'on'. If set to 'on', your siblings should use the option "allow-miss" on their cache_peer lines for connecting to you. DOC_END NAME: minimum_direct_hops TYPE: int DEFAULT: 4 LOC: Config.minDirectHops DOC_START If using the ICMP pinging stuff, do direct fetches for sites which are no more than this many hops away. DOC_END NAME: minimum_direct_rtt TYPE: int DEFAULT: 400 LOC: Config.minDirectRtt DOC_START If using the ICMP pinging stuff, do direct fetches for sites which are no more than this many rtt milliseconds away. DOC_END NAME: netdb_low TYPE: int DEFAULT: 900 LOC: Config.Netdb.low DOC_NONE NAME: netdb_high TYPE: int DEFAULT: 1000 LOC: Config.Netdb.high DOC_START The low and high water marks for the ICMP measurement database. These are counts, not percents. The defaults are 900 and 1000. When the high water mark is reached, database entries will be deleted until the low mark is reached. DOC_END NAME: netdb_ping_period TYPE: time_t LOC: Config.Netdb.period DEFAULT: 5 minutes DOC_START The minimum period for measuring a site. There will be at least this much delay between successive pings to the same network. The default is five minutes. 
DOC_END NAME: query_icmp COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.query_icmp DOC_START If you want to ask your peers to include ICMP data in their ICP replies, enable this option. If your peer has configured Squid (during compilation) with '--enable-icmp' that peer will send ICMP pings to origin server sites of the URLs it receives. If you enable this option the ICP replies from that peer will include the ICMP data (if available). Then, when choosing a parent cache, Squid will choose the parent with the minimal RTT to the origin server. When this happens, the hierarchy field of the access.log will be "CLOSEST_PARENT_MISS". This option is off by default. DOC_END NAME: test_reachability COMMENT: on|off TYPE: onoff DEFAULT: off LOC: Config.onoff.test_reachability DOC_START When this is 'on', ICP MISS replies will be ICP_MISS_NOFETCH instead of ICP_MISS if the target host is NOT in the ICMP database, or has a zero RTT. DOC_END NAME: icp_query_timeout COMMENT: (msec) DEFAULT: 0 TYPE: int LOC: Config.Timeout.icp_query DOC_START Normally Squid will automatically determine an optimal ICP query timeout value based on the round-trip-time of recent ICP queries. If you want to override the value determined by Squid, set this 'icp_query_timeout' to a non-zero value. This value is specified in MILLISECONDS, so, to use a 2-second timeout (the old default), you would write: icp_query_timeout 2000 DOC_END NAME: maximum_icp_query_timeout COMMENT: (msec) DEFAULT: 2000 TYPE: int LOC: Config.Timeout.icp_query_max DOC_START Normally the ICP query timeout is determined dynamically. But sometimes it can lead to very large values (say 5 seconds). Use this option to put an upper limit on the dynamic timeout value. Do NOT use this option to always use a fixed (instead of a dynamic) timeout value. To set a fixed timeout see the 'icp_query_timeout' directive. 
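As a sketch, to keep the dynamically computed ICP timeout while capping it at 3 seconds (the cap value here is illustrative):

```
# Let Squid compute the ICP query timeout dynamically, but never
# let it exceed 3000 msec.
maximum_icp_query_timeout 3000
```
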
DOC_END

NAME: minimum_icp_query_timeout
COMMENT: (msec)
DEFAULT: 5
TYPE: int
LOC: Config.Timeout.icp_query_min
DOC_START
	Normally the ICP query timeout is determined dynamically. But sometimes it can lead to very small timeouts, even lower than the normal latency variance on your link due to traffic. Use this option to put a lower limit on the dynamic timeout value.

	Do NOT use this option to always use a fixed (instead of a dynamic) timeout value. To set a fixed timeout see the 'icp_query_timeout' directive.
DOC_END

COMMENT_START
 MULTICAST ICP OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: mcast_groups
TYPE: wordlist
LOC: Config.mcast_group_list
DEFAULT: none
DOC_START
	This tag specifies a list of multicast groups which your server should join to receive multicasted ICP queries.

	NOTE! Be very careful what you put here! Be sure you understand the difference between an ICP _query_ and an ICP _reply_. This option is to be set only if you want to RECEIVE multicast queries. Do NOT set this option to SEND multicast ICP (use cache_peer for that). ICP replies are always sent via unicast, so this option does not affect whether or not you will receive replies from multicast group members.

	You must be very careful to NOT use a multicast address which is already in use by another group of caches.

	If you are unsure about multicast, please read the Multicast chapter in the Squid FAQ (http://www.squid-cache.org/FAQ/).

	Usage: mcast_groups 239.128.16.128 224.0.1.20

	By default, Squid doesn't listen on any multicast groups.
DOC_END

NAME: mcast_miss_addr
IFDEF: MULTICAST_MISS_STREAM
TYPE: address
LOC: Config.mcast_miss.addr
DEFAULT: 255.255.255.255
DOC_START
	If you enable this option, every "cache miss" URL will be sent out on the specified multicast address.

	Do not enable this option unless you are absolutely certain you understand what you are doing.
DOC_END

NAME: mcast_miss_ttl
IFDEF: MULTICAST_MISS_STREAM
TYPE: ushort
LOC: Config.mcast_miss.ttl
DEFAULT: 16
DOC_START
	This is the time-to-live value for packets multicasted when multicasting off cache miss URLs is enabled. By default this is set to 'site scope', i.e. 16.
DOC_END

NAME: mcast_miss_port
IFDEF: MULTICAST_MISS_STREAM
TYPE: ushort
LOC: Config.mcast_miss.port
DEFAULT: 3135
DOC_START
	This is the port number to be used in conjunction with 'mcast_miss_addr'.
DOC_END

NAME: mcast_miss_encode_key
IFDEF: MULTICAST_MISS_STREAM
TYPE: string
LOC: Config.mcast_miss.encode_key
DEFAULT: XXXXXXXXXXXXXXXX
DOC_START
	The URLs that are sent in the multicast miss stream are encrypted. This is the encryption key.
DOC_END

NAME: mcast_icp_query_timeout
COMMENT: (msec)
DEFAULT: 2000
TYPE: int
LOC: Config.Timeout.mcast_icp_query
DOC_START
	For multicast peers, Squid regularly sends out ICP "probes" to count how many other peers are listening on the given multicast address. This value specifies how long Squid should wait to count all the replies. The default is 2000 msec, or 2 seconds.
DOC_END

COMMENT_START
 INTERNAL ICON OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: icon_directory
TYPE: string
LOC: Config.icons.directory
DEFAULT: @DEFAULT_ICON_DIR@
DOC_START
	Where the icons are stored. These are normally kept in @DEFAULT_ICON_DIR@
DOC_END

NAME: global_internal_static
TYPE: onoff
LOC: Config.onoff.global_internal_static
DEFAULT: on
DOC_START
	This directive controls whether Squid should intercept all requests for /squid-internal-static/ no matter which host the URL is requesting (default 'on' setting), or if nothing special should be done for such URLs ('off' setting). The purpose of this directive is to make icons etc. work better in complex cache hierarchies where it may not always be possible for all corners in the cache mesh to reach the server generating a directory listing.
DOC_END

NAME: short_icon_urls
TYPE: onoff
LOC: Config.icons.use_short_names
DEFAULT: off
DOC_START
	If this is enabled Squid will use short URLs for icons. If off, the URLs for icons will always be absolute URLs including the proxy name and port.
DOC_END

COMMENT_START
 ERROR PAGE OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: error_directory
TYPE: string
LOC: Config.errorDirectory
DEFAULT: @DEFAULT_ERROR_DIR@
DOC_START
	If you wish to create your own versions of the default (English) error files, either to customize them to suit your language or company, copy the template English files to another directory and point this tag at them.

	The Squid developers are interested in making Squid available in a wide variety of languages. If you are making translations for a language that Squid does not currently provide, please consider contributing your translation back to the project.
DOC_END

NAME: error_map
TYPE: errormap
LOC: Config.errorMapList
DEFAULT: none
DOC_START
	Map errors to custom messages

		error_map message_url http_status ...

	http_status ... is a list of HTTP status codes or Squid error messages.

	Use in accelerators to substitute the error messages returned by servers with other custom errors.

		error_map http://your.server/error/404.shtml 404

	Requests for error messages are GET requests for the configured URL with the following special headers:

		X-Error-Status: The received HTTP status code (i.e. 404)
		X-Request-URI: The requested URI where the error occurred

	In addition the following headers are forwarded from the client request:

		User-Agent, Cookie, X-Forwarded-For, Via, Authorization, Accept, Referer

	And the following headers from the server reply:

		Server, Via, Location, Content-Location

	The reply returned to the client will carry the original HTTP headers from the real error message, but with the reply body of the configured error message.
DOC_END

NAME: err_html_text
TYPE: eol
LOC: Config.errHtmlText
DEFAULT: none
DOC_START
	HTML text to include in error messages. Make this a "mailto" URL to your admin address, or maybe just a link to your organization's Web page.

	To include this in your error messages, you must rewrite the error template files (found in the "errors" directory). Wherever you want the 'err_html_text' line to appear, insert a %L tag in the error template file.
DOC_END

NAME: deny_info
TYPE: denyinfo
LOC: Config.denyInfoList
DEFAULT: none
DOC_START
	Usage:   deny_info err_page_name acl
	or       deny_info http://... acl
	Example: deny_info ERR_CUSTOM_ACCESS_DENIED bad_guys

	This can be used to return an ERR_ page for requests which do not pass the 'http_access' rules. Squid remembers the last acl it evaluated in http_access, and if a 'deny_info' line exists for that ACL, Squid returns a corresponding error page.

	The acl is typically the last acl on the http_access deny line which denied access. The exceptions to this rule are:
	- When Squid needs to request authentication credentials. It's then the first authentication related acl encountered.
	- When none of the http_access lines matches. It's then the last acl processed on the last http_access line.

	You may use ERR_ pages that come with Squid or create your own pages and put them into the configured errors/ directory.

	Alternatively you can specify an error URL. The browsers will get redirected (302) to the specified URL. %s in the redirection URL will be replaced by the requested URL.

	Alternatively you can tell Squid to reset the TCP connection by specifying TCP_RESET.
DOC_END

COMMENT_START
 OPTIONS INFLUENCING REQUEST FORWARDING
 -----------------------------------------------------------------------------
COMMENT_END

NAME: nonhierarchical_direct
TYPE: onoff
LOC: Config.onoff.nonhierarchical_direct
DEFAULT: on
DOC_START
	By default, Squid will send any non-hierarchical requests (matching hierarchy_stoplist or not cacheable request type) direct to origin servers.

	If you set this to off, Squid will prefer to send these requests to parents.

	Note that in most configurations, by turning this off you will only add latency to these requests without any improvement in global hit ratio.

	If you are inside a firewall see never_direct instead of this directive.
DOC_END

NAME: prefer_direct
TYPE: onoff
LOC: Config.onoff.prefer_direct
DEFAULT: off
DOC_START
	Normally Squid tries to use parents for most requests. If you for some reason want it to first try going direct and only use a parent if going direct fails, set this to on.

	By combining nonhierarchical_direct off and prefer_direct on you can set up Squid to use a parent as a backup path if going direct fails.

	Note: If you want Squid to use parents for all requests see the never_direct directive. prefer_direct only modifies how Squid acts on cacheable requests.
DOC_END

NAME: always_direct
TYPE: acl_access
LOC: Config.accessList.AlwaysDirect
DEFAULT: none
DOC_START
	Usage: always_direct allow|deny [!]aclname ...

	Here you can use ACL elements to specify requests which should ALWAYS be forwarded by Squid to the origin servers without using any peers. For example, to always directly forward requests for local servers ignoring any parents or siblings you may have, use something like:

		acl local-servers dstdomain my.domain.net
		always_direct allow local-servers

	To always forward FTP requests directly, use

		acl FTP proto FTP
		always_direct allow FTP

	NOTE: There is a similar, but opposite option named 'never_direct'.
	You need to be aware that "always_direct deny foo" is NOT the same thing as "never_direct allow foo". You may need to use a deny rule to exclude a more-specific case of some other rule. Example:

		acl local-external dstdomain external.foo.net
		acl local-servers dstdomain .foo.net
		always_direct deny local-external
		always_direct allow local-servers

	NOTE: If your goal is to make the client forward the request directly to the origin server bypassing Squid, then this needs to be done in the client configuration. Squid configuration can only tell Squid how Squid should fetch the object.

	NOTE: This directive is not related to caching. The replies are cached as usual even if you use always_direct. To not cache the replies see no_cache.

	This option replaces some v1.1 options such as local_domain and local_ip.
DOC_END

NAME: never_direct
TYPE: acl_access
LOC: Config.accessList.NeverDirect
DEFAULT: none
DOC_START
	Usage: never_direct allow|deny [!]aclname ...

	never_direct is the opposite of always_direct. Please read the description for always_direct if you have not already.

	With 'never_direct' you can use ACL elements to specify requests which should NEVER be forwarded directly to origin servers. For example, to force the use of a proxy for all requests, except those in your local domain, use something like:

		acl local-servers dstdomain .foo.net
		acl all src 0.0.0.0/0.0.0.0
		never_direct deny local-servers
		never_direct allow all

	or if Squid is inside a firewall and there are local intranet servers inside the firewall, use something like:

		acl local-intranet dstdomain .foo.net
		acl local-external dstdomain external.foo.net
		always_direct deny local-external
		always_direct allow local-intranet
		never_direct allow all

	This option replaces some v1.1 options such as inside_firewall and firewall_ip.
DOC_END

COMMENT_START
 ADVANCED NETWORKING OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: incoming_icp_average
TYPE: int
DEFAULT: 6
LOC: Config.comm_incoming.icp_average
DOC_NONE

NAME: incoming_http_average
TYPE: int
DEFAULT: 4
LOC: Config.comm_incoming.http_average
DOC_NONE

NAME: incoming_dns_average
TYPE: int
DEFAULT: 4
LOC: Config.comm_incoming.dns_average
DOC_NONE

NAME: min_icp_poll_cnt
TYPE: int
DEFAULT: 8
LOC: Config.comm_incoming.icp_min_poll
DOC_NONE

NAME: min_dns_poll_cnt
TYPE: int
DEFAULT: 8
LOC: Config.comm_incoming.dns_min_poll
DOC_NONE

NAME: min_http_poll_cnt
TYPE: int
DEFAULT: 8
LOC: Config.comm_incoming.http_min_poll
DOC_START
	Heavy voodoo here. I can't even believe you are reading this. Are you crazy? Don't even think about adjusting these unless you understand the algorithms in comm_select.c first!
DOC_END

NAME: tcp_recv_bufsize
COMMENT: (bytes)
TYPE: b_size_t
DEFAULT: 0 bytes
LOC: Config.tcpRcvBufsz
DOC_START
	Size of receive buffer to set for TCP sockets. Probably just as easy to change your kernel's default. Set to zero to use the default buffer size.
DOC_END

COMMENT_START
 DNS OPTIONS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: check_hostnames
TYPE: onoff
DEFAULT: on
LOC: Config.onoff.check_hostnames
DOC_START
	For security and stability reasons Squid by default checks hostnames for Internet standard RFC compliance. If you do not want Squid to perform these checks then turn this directive off.
DOC_END

NAME: allow_underscore
TYPE: onoff
DEFAULT: on
LOC: Config.onoff.allow_underscore
DOC_START
	Underscore characters are not strictly allowed in Internet hostnames but are nevertheless used by many sites. Set this to off if you want Squid to be strict about the standard.

	This check is performed only when check_hostnames is set to on.
DOC_END

NAME: cache_dns_program
TYPE: string
IFDEF: USE_DNSSERVERS
DEFAULT: @DEFAULT_DNSSERVER@
LOC: Config.Program.dnsserver
DOC_START
	Specify the location of the executable for the dnslookup process.
DOC_END

NAME: dns_children
TYPE: int
IFDEF: USE_DNSSERVERS
DEFAULT: 5
LOC: Config.dnsChildren
DOC_START
	The number of processes spawned to service DNS name lookups. For heavily loaded caches on large servers, you should probably increase this value to at least 10. The maximum is 32. The default is 5.

	You must have at least one dnsserver process.
DOC_END

NAME: dns_retransmit_interval
TYPE: time_t
DEFAULT: 5 seconds
LOC: Config.Timeout.idns_retransmit
IFDEF: !USE_DNSSERVERS
DOC_START
	Initial retransmit interval for DNS queries. The interval is doubled each time all configured DNS servers have been tried.
DOC_END

NAME: dns_timeout
TYPE: time_t
DEFAULT: 2 minutes
LOC: Config.Timeout.idns_query
IFDEF: !USE_DNSSERVERS
DOC_START
	DNS Query timeout. If no response is received to a DNS query within this time all DNS servers for the queried domain are assumed to be unavailable.
DOC_END

NAME: dns_defnames
COMMENT: on|off
TYPE: onoff
DEFAULT: off
LOC: Config.onoff.res_defnames
DOC_START
	Normally the RES_DEFNAMES resolver option is disabled (see res_init(3)). This prevents caches in a hierarchy from interpreting single-component hostnames locally. To allow Squid to handle single-component names, enable this option.
DOC_END

NAME: dns_nameservers
TYPE: wordlist
DEFAULT: none
LOC: Config.dns_nameservers
DOC_START
	Use this if you want to specify a list of DNS name servers (IP addresses) to use instead of those given in your /etc/resolv.conf file.

	On Windows platforms, if no value is specified here or in the /etc/resolv.conf file, the list of DNS name servers is taken from the Windows registry; both static and dynamic DHCP configurations are supported.
	Example: dns_nameservers 10.0.0.1 192.172.0.4
DOC_END

NAME: hosts_file
TYPE: string
DEFAULT: @DEFAULT_HOSTS@
LOC: Config.etcHostsPath
DOC_START
	Location of the host-local IP name-address associations database. Most Operating Systems have such a file in different default locations:
	- Un*X & Linux: /etc/hosts
	- Windows NT/2000: %SystemRoot%\system32\drivers\etc\hosts (%SystemRoot% value install default is c:\winnt)
	- Windows XP/2003: %SystemRoot%\system32\drivers\etc\hosts (%SystemRoot% value install default is c:\windows)
	- Windows 9x/Me: %windir%\hosts (%windir% value is usually c:\windows)
	- Cygwin: /etc/hosts

	The file contains newline-separated definitions, in the form:

		ip_address_in_dotted_form name [name ...]

	names are whitespace-separated. Lines beginning with a hash (#) character are comments.

	The file is checked at startup and upon configuration. If set to 'none', it won't be checked. If append_domain is used, that domain will be added to domain-local (i.e. not containing any dot character) host definitions.
DOC_END

NAME: dns_testnames
TYPE: wordlist
LOC: Config.dns_testname_list
DEFAULT: none
DEFAULT_IF_NONE: netscape.com internic.net nlanr.net microsoft.com
DOC_START
	The DNS tests exit as soon as the first site is successfully looked up.

	This test can be disabled with the -D command line option.
DOC_END

NAME: append_domain
TYPE: string
LOC: Config.appendDomain
DEFAULT: none
DOC_START
	Appends local domain name to hostnames without any dots in them. append_domain must begin with a period.

	Be warned there are now Internet names with no dots in them using only top-domain names, so setting this may cause some Internet sites to become unavailable.

	Example: append_domain .yourdomain.com
DOC_END

NAME: ignore_unknown_nameservers
TYPE: onoff
LOC: Config.onoff.ignore_unknown_nameservers
DEFAULT: on
DOC_START
	By default Squid checks that DNS responses are received from the same IP addresses they are sent to.
	If they don't match, Squid ignores the response and writes a warning message to cache.log. You can allow responses from unknown nameservers by setting this option to 'off'.
DOC_END

NAME: ipcache_size
COMMENT: (number of entries)
TYPE: int
DEFAULT: 1024
LOC: Config.ipcache.size
DOC_NONE

NAME: ipcache_low
COMMENT: (percent)
TYPE: int
DEFAULT: 90
LOC: Config.ipcache.low
DOC_NONE

NAME: ipcache_high
COMMENT: (percent)
TYPE: int
DEFAULT: 95
LOC: Config.ipcache.high
DOC_START
	The size, low-, and high-water marks for the IP cache.
DOC_END

NAME: fqdncache_size
COMMENT: (number of entries)
TYPE: int
DEFAULT: 1024
LOC: Config.fqdncache.size
DOC_START
	Maximum number of FQDN cache entries.
DOC_END

COMMENT_START
 MISCELLANEOUS
 -----------------------------------------------------------------------------
COMMENT_END

NAME: memory_pools
COMMENT: on|off
TYPE: onoff
DEFAULT: on
LOC: Config.onoff.mem_pools
DOC_START
	If set, Squid will keep pools of allocated (but unused) memory available for future use. If memory is a premium on your system and you believe your malloc library outperforms Squid routines, disable this.
DOC_END

NAME: memory_pools_limit
COMMENT: (bytes)
TYPE: b_size_t
DEFAULT: 5 MB
LOC: Config.MemPools.limit
DOC_START
	Used only with memory_pools on:
		memory_pools_limit 50 MB

	If set to a non-zero value, Squid will keep at most the specified limit of allocated (but unused) memory in memory pools. All free() requests that exceed this limit will be handled by your malloc library. Squid does not pre-allocate any memory, just safe-keeps objects that otherwise would be free()d. Thus, it is safe to set memory_pools_limit to a reasonably high value even if your configuration will use less memory.

	If set to zero, Squid will keep all memory it can. That is, there will be no limit on the total amount of memory used for safe-keeping.

	To disable memory allocation optimization, do not set memory_pools_limit to 0. Set memory_pools to "off" instead.
	An overhead for maintaining memory pools is not taken into account when the limit is checked. This overhead is close to four bytes per object kept. However, pools may actually _save_ memory because of reduced memory thrashing in your malloc library.
DOC_END

NAME: forwarded_for
COMMENT: on|off
TYPE: onoff
DEFAULT: on
LOC: opt_forwarded_for
DOC_START
	If set, Squid will include your system's IP address or name in the HTTP requests it forwards. By default it looks like this:

		X-Forwarded-For: 192.1.2.3

	If you disable this, it will appear as

		X-Forwarded-For: unknown
DOC_END

NAME: cachemgr_passwd
TYPE: cachemgrpasswd
DEFAULT: none
LOC: Config.passwd_list
DOC_START
	Specify passwords for cachemgr operations.

	Usage: cachemgr_passwd password action action ...

	Some valid actions are (see cache manager menu for a full list):
		5min 60min asndb authenticator cbdata client_list
		comm_incoming config * counters delay digest_stats
		dns events filedescriptors fqdncache histograms
		http_headers info io ipcache mem menu netdb
		non_peers objects offline_toggle * pconn peer_select
		redirector refresh server_list shutdown * store_digest
		storedir utilization via_headers vm_objects

	* Indicates actions which will not be performed without a valid password; others can be performed if not listed here.

	To disable an action, set the password to "disable".
	To allow performing an action without a password, set the password to "none".

	Use the keyword "all" to set the same password for all actions.

	Example:
		cachemgr_passwd secret shutdown
		cachemgr_passwd lesssssssecret info stats/objects
		cachemgr_passwd disable all
DOC_END

NAME: client_db
COMMENT: on|off
TYPE: onoff
DEFAULT: on
LOC: Config.onoff.client_db
DOC_START
	If you want to disable collecting per-client statistics, turn off client_db here.
DOC_END

NAME: reload_into_ims
IFDEF: HTTP_VIOLATIONS
COMMENT: on|off
TYPE: onoff
DEFAULT: off
LOC: Config.onoff.reload_into_ims
DOC_START
	When you enable this option, client no-cache or ``reload'' requests will be changed to If-Modified-Since requests. Doing this VIOLATES the HTTP standard. Enabling this feature could make you liable for problems which it causes.

	See also refresh_pattern for a more selective approach.
DOC_END

NAME: maximum_single_addr_tries
TYPE: int
LOC: Config.retry.maxtries
DEFAULT: 1
DOC_START
	This sets the maximum number of connection attempts for a host that only has one address (for multiple-address hosts, each address is tried once).

	The default value is one attempt, the (not recommended) maximum is 255 tries. A warning message will be generated if it is set to a value greater than ten.

	Note: This is in addition to the request re-forwarding which takes place if Squid fails to get a satisfying response.
DOC_END

NAME: retry_on_error
TYPE: onoff
LOC: Config.retry.onerror
DEFAULT: off
DOC_START
	If set to on Squid will automatically retry requests when receiving an error response. This is mainly useful if you are in a complex cache hierarchy to work around access control errors.
DOC_END

NAME: as_whois_server
TYPE: string
LOC: Config.as_whois_server
DEFAULT: whois.ra.net
DEFAULT_IF_NONE: whois.ra.net
DOC_START
	WHOIS server to query for AS numbers. NOTE: AS numbers are queried only when Squid starts up, not for every request.
DOC_END

NAME: offline_mode
TYPE: onoff
LOC: Config.onoff.offline
DEFAULT: off
DOC_START
	Enable this option and Squid will never try to validate cached objects.
DOC_END

NAME: uri_whitespace
TYPE: uri_whitespace
LOC: Config.uri_whitespace
DEFAULT: strip
DOC_START
	What to do with requests that have whitespace characters in the URI. Options:

	strip:  The whitespace characters are stripped out of the URL. This is the behavior recommended by RFC2396.
	deny:   The request is denied. The user receives an "Invalid Request" message.
	allow:  The request is allowed and the URI is not changed. The whitespace characters remain in the URI. Note the whitespace is passed to redirector processes if they are in use.
	encode: The request is allowed and the whitespace characters are encoded according to RFC1738. This could be considered a violation of the HTTP/1.1 RFC because proxies are not allowed to rewrite URIs.
	chop:   The request is allowed and the URI is chopped at the first whitespace. This might also be considered a violation.
DOC_END

NAME: coredump_dir
TYPE: string
LOC: Config.coredump_dir
DEFAULT: none
DEFAULT_IF_NONE: none
DOC_START
	By default Squid leaves core files in the directory from where it was started. If you set 'coredump_dir' to a directory that exists, Squid will chdir() to that directory at startup and coredump files will be left there.

NOCOMMENT_START
# Leave coredumps in the first cache dir
coredump_dir @DEFAULT_SWAP_DIR@
NOCOMMENT_END
DOC_END

NAME: chroot
TYPE: string
LOC: Config.chroot_dir
DEFAULT: none
DOC_START
	Use this to have Squid do a chroot() while initializing. This also causes Squid to fully drop root privileges after initializing. This means, for example, that if you use an HTTP port less than 1024 and try to reconfigure, you may get an error saying that Squid cannot open the port.
DOC_END

NAME: balance_on_multiple_ip
TYPE: onoff
LOC: Config.onoff.balance_on_multiple_ip
DEFAULT: on
DOC_START
	Some load balancing servers based on round robin DNS have been found not to preserve user session state across requests to different IP addresses.

	By default Squid rotates IPs per request. By disabling this directive only connection failure triggers rotation.
DOC_END

NAME: pipeline_prefetch
TYPE: onoff
LOC: Config.onoff.pipeline_prefetch
DEFAULT: off
DOC_START
	To boost the performance of pipelined requests to closer match that of a non-proxied environment Squid can try to fetch up to two requests in parallel from a pipeline.
	Defaults to off for bandwidth management and access logging reasons.
DOC_END

NAME: high_response_time_warning
TYPE: int
COMMENT: (msec)
LOC: Config.warnings.high_rptm
DEFAULT: 0
DOC_START
	If the one-minute median response time exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in milliseconds.
DOC_END

NAME: high_page_fault_warning
TYPE: int
LOC: Config.warnings.high_pf
DEFAULT: 0
DOC_START
	If the one-minute average page fault rate exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in page faults per second.
DOC_END

NAME: high_memory_warning
TYPE: b_size_t
LOC: Config.warnings.high_memory
DEFAULT: 0 KB
DOC_START
	If the memory usage (as determined by mallinfo) exceeds this amount, Squid prints a WARNING with debug level 0 to get the administrator's attention.
DOC_END

NAME: sleep_after_fork
COMMENT: (microseconds)
TYPE: int
LOC: Config.sleep_after_fork
DEFAULT: 0
DOC_START
	When this is set to a non-zero value, the main Squid process sleeps the specified number of microseconds after a fork() system call. This sleep may help the situation where your system reports fork() failures due to lack of (virtual) memory. Note, however, if you have a lot of child processes, these sleep delays will add up and your Squid will not service requests for some amount of time until all the child processes have been started. On Windows, values less than 1000 (1 millisecond) are rounded to 1000.
DOC_END

EOF
%% The LaTeX package tcolorbox - version 6.0.3 (2023/03/17)
%% tcolorbox-example-poster.tex: a poster example for tcolorbox
%%
%% -------------------------------------------------------------------------------------------
%% Copyright (c) 2006-2023 by Prof. Dr. Dr. Thomas F. Sturm
%% -------------------------------------------------------------------------------------------
%%
%% This work may be distributed and/or modified under the
%% conditions of the LaTeX Project Public License, either version 1.3
%% of this license or (at your option) any later version.
%% The latest version of this license is in
%%   http://www.latex-project.org/lppl.txt
%% and version 1.3 or later is part of all distributions of LaTeX
%% version 2005/12/01 or later.
%%
%% This work has the LPPL maintenance status `author-maintained'.
%%
%% This work consists of all files listed in README
%%
% arara: pdflatex: { shell: yes }
% arara: pdflatex: { shell: yes }
\documentclass[12pt]{article}
\usepackage[a3paper,landscape]{geometry}
\usepackage{lipsum}
\usepackage{lmodern}
\usepackage{enumerate}
\usepackage[poster]{tcolorbox}
\tcbuselibrary{minted} % <- replace by \tcbuselibrary{listings}, if minted does not work for you
\pagestyle{empty}

\begin{document}
\begin{tcbposter}[
  coverage = {
    spread,
    interior style={top color=yellow,bottom color=yellow!50!red},
    watermark text={\LaTeX\ Poster},
    watermark color=yellow,
  },
  poster = {showframe=false,columns=4,rows=5},
  boxes = {
    enhanced standard jigsaw,sharp corners=downhill,arc=3mm,boxrule=1mm,
    colback=white,opacityback=0.75,colframe=blue,
    title style={left color=black,right color=cyan},
    fonttitle=\bfseries\Large\scshape
  }
]
%----
\posterbox[blankest,interior engine=path,height=3cm,
  halign=center,valign=center,fontupper=\bfseries\large,colupper=red!25!black,
  underlay={
    \node[right,inner sep=0pt,outer sep=0pt] at (frame.west)
      {\includegraphics[height=3cm]{pink_marble.png}};
    \node[left,inner sep=0pt,outer sep=0pt] at (frame.east)
      {\includegraphics[height=3cm]{crinklepaper.png}};
  },
]{name=title,column=1,span=3,below=top}{
  \resizebox{18cm}{!}{\bfseries\Huge My Important Project}\\[3mm]
  [email protected]
}
%----
\posterbox[adjusted title=References]{name=references,column=2,span=1.5,above=bottom}{
  \begin{enumerate}[{[1]}]
  \item\label{litA} Important Authors, \textit{Important Title}
  \item\label{litB} More Important Authors, \textit{More Important Title}
  \item\label{litC} Less Important Authors, \textit{Less Important Title}
  \end{enumerate}
}
%----
\posterbox[adjusted title=Process,halign=center]{name=process,column=2,span=2,above=references}{
  \begin{tikzpicture}[very thick,radius=2cm]
    \begin{scope}
      \path[draw=black,fill=white] (0,0) circle;
      \path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=30];
    \end{scope}
    \begin{scope}[xshift=5cm]
      \path[draw=black,fill=white] (0,0) circle;
      \path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=70];
    \end{scope}
    \begin{scope}[xshift=10cm]
      \path[draw=black,fill=white] (0,0) circle;
      \path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=110];
    \end{scope}
    \begin{scope}[xshift=15cm]
      \path[draw=black,fill=white] (0,0) circle;
      \path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=240];
    \end{scope}
  \end{tikzpicture}
}
%----
\posterbox[adjusted title=Project Description]{name=project,
  sequence=1 between title and bottom then 2 between title and process}{
  See [\ref{litA}]: \lipsum[1]
  \begin{center}
    \tikz \draw[thick,rounded corners=8pt]
      (0,0)--(0,2)--(1,3.25)--(2,2)--(2,0)--(0,2)--(2,2)--(0,0)--(2,0);
    \quad by [\ref{litB}]
  \end{center}
  \lipsum[2-3]\par
  See [\ref{litC}]: \lipsum[4]
  \begin{center}
    \tikz \shadedraw [left color=red,right color=blue] (0,0) rectangle (2,2);
  \end{center}
  That's all.
}
%----
\posterbox[adjusted title=Central Picture,
  interior style={fill overzoom image=blueshade.png}]
  {name=picture,column=3,between=title and process}{}
%----
\begin{posterboxenv}[adjusted title=Core Algorithm,leftupper=0pt,rightupper=0pt]
  {name=algorithm,column=4,between=top and references}
\begin{tcblisting}{blankest,listing only}
\begin{tikzpicture}[very thick,radius=2cm]
\begin{scope}
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=30];
\end{scope}
\begin{scope}[xshift=5cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=70];
\end{scope}
\begin{scope}[xshift=10cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=110];
\end{scope}
\begin{scope}[xshift=15cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=240];
\end{scope}
\end{tikzpicture}
\begin{tikzpicture}[very thick,radius=1cm]
\begin{scope}
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=30];
\end{scope}
\begin{scope}[xshift=5cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=70];
\end{scope}
\begin{scope}[xshift=10cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=110];
\end{scope}
\begin{scope}[xshift=15cm]
\path[draw=black,fill=white] (0,0) circle;
\path[fill=red] (0,0) -- (2,0) arc [start angle=0, end angle=240];
\end{scope}
\end{tikzpicture}
\end{tcblisting}
\end{posterboxenv}
%----
\posterbox[adjusted title=Contact,fit,fit basedim=12pt]
  {name=contact,column*=4,span=1.5,between=process and bottom}{
  \lipsum[2]
}
\end{tcbposter}
\end{document}
I'm watching this video lecture http://ocw.mit.edu/courses/mathematics/18-02-multivariable-calculus-fall-2007/video-lectures/lecture-11-chain-rule/ and I'm stuck at around 3:40, I can't seem to figure out what he is doing.
He is showing how to find the derivative of $f(x)=\sin^{-1}(x)$.
At some point he goes from the expression $\frac{dy}{dx}=\frac{1}{\cos(y)}$ to $\frac{dy}{dx}= \frac{1}{\sqrt{1-x^2}}$.
Ok, I know that's the result, that's how I always did it, but I never actually derived it myself. So yeah, I'd like to know what he did in those last two steps, to get from the first expression to the second.
I know it may (will) be something completely stupid and I'll say 'oh... facepalm', but for some reason I can't figure out how he did it. I'm guessing the next logical step is to replace $y=\sin^{-1}(x)$, but then? I've never been on good terms with trigonometric functions and identities really, so I'd appreciate some enlightenment.
Thank you.
Remember that (in a suitable interval) $\cos y = \sqrt{1-\sin^2 y}$, and that here $y = \arcsin x$, so $x = \sin y$ and $\cos y = \sqrt{1-x^2}$. $\frac{dy}{dx} = \frac1{\cos y}$ comes from the "formula" $\frac1{\frac{dx}{dy}} = \frac{dy}{dx}$.
Another way to deal with this, which is useful for obtaining all of the derivatives of the inverse trig functions, is to keep in mind that $\ y \ = \ \sin^{-1} x \ \Rightarrow \sin y \ = \ x \ = \frac{x}{1} \ $. You can now construct a right triangle with one of the angles being $y$ : the leg opposite $y$ has length $x$, and the hypotenuse, a length of $1$ . The leg adjacent to $y$ must then have length $\sqrt{1 - x^2}$ . Hence,
$$\cos y \ = \ \frac{\sqrt{1 - x^2}}{1} \ \Rightarrow \frac{d}{dx}\sin^{-1} x \ = \ \frac{1}{\sqrt{1 - x^2}} .$$
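Putting the pieces of these answers together, the full derivation can be written out step by step (a consolidated summary, not a quote from the lecture itself):

```latex
y = \sin^{-1} x
\quad\Longrightarrow\quad \sin y = x
% Differentiate both sides with respect to x (chain rule on the left):
\cos y \,\frac{dy}{dx} = 1
\quad\Longrightarrow\quad \frac{dy}{dx} = \frac{1}{\cos y}
% On the principal branch -\pi/2 \le y \le \pi/2 we have \cos y \ge 0, so:
\cos y = \sqrt{1-\sin^2 y} = \sqrt{1-x^2}
\quad\Longrightarrow\quad \frac{dy}{dx} = \frac{1}{\sqrt{1-x^2}}
```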
The Best Fraud Detection Companies of 2021 Your Business Can Count on to Protect Itself
Despite all the convenience of credit card usage, there are still security gaps that result in data and money losses, as well as damage to business reputation. According to statistics, credit card fraud is the #1 form of identity theft, and what’s more, it is especially on the rise right now because of the pandemic and the rapid growth of the e-commerce sector. That’s why protecting your personal data and finances from intruders, with the help of the best fraud detection companies and the machine learning in banking solutions they offer, becomes essential. Let’s find out how AI and ML may help along the way.
How Does Card Fraud Happen?
According to the Preventing Credit Card Fraud Through Pattern Recognition research, “As time progresses; the classification of credit card fraud types begins to form, and advanced fraud detection systems are developed to combat the cat and mouse game of credit card fraud. The awareness of the types of credit card fraud is crucial for the understanding of how algorithms detect fraud depending on the fraud type.”
Thus, there are several types of credit card fraud that may occur.
• Card-not-present (CNP) fraud. This is one of the most frequent types of credit card fraud since it doesn’t require the plastic card to be present. Card credentials and some personal information are enough to illegally use the card online.
• Counterfeit and skimming fraud. This approach refers to intercepting credit card details near the ATM or POS terminal.
• Lost and stolen card fraud. This is a very old fraud method that exploits unprotected credit cards that were lost or stolen, using them until they are blocked by the owner or run out of money.
• Card-never-arrived fraud. Since most credit cards are still delivered by traditional mail, they may be stolen or intercepted before they arrive.
• False application fraud. This is one more widespread type of fraud. It happens when somebody uses your personal information to obtain a loan. As usual, such loans may be obtained from “loan shark” companies that don’t care about the identity and real solvency of the borrower, and instead use illegal practices to get the loan back while making the victim pay enormous interest rates.
How Does Machine Learning Detect Fraud?
Fortunately, machine learning is able to deal with most of the types of fraud we have described above. Smart systems work through a careful and immediate analysis of data, which allows them to find invisible anomalies, changed patterns, and make accurate inferences about whether a transaction is legitimate or fraudulent.
Compared to outdated defense methods, machine learning and artificial intelligence in banking offer a new and more competent approach.
AI/ML Fraud Detection vs Old Approaches
• Machine learning in banking is fast. Machine learning works in real time, in contrast to outdated financial protection methods, which could only detect fraudulent transactions after they had taken place.
• Artificial intelligence in banking is efficient. Because of the AI’s ability to deal with data in real-time and prevent fraudulent attempts, it becomes an efficient and money-saving tool for financial companies and their customers.
• The best fraud detection companies are secure. AI solutions provided by the best fraud detection companies have a high level of security and are carefully tailored to the specific business needs.
What Are the Best Credit Card Fraud Detection Techniques with AI and ML?
Despite the fact that the use of artificial intelligence and machine learning in banking may be explained quite simply, there are still purely technical credit card fraud detection techniques that ensure a machine learning system works seamlessly and does its job well.
• Naïve Bayes (NB). This technique is based on the theory of probability and the maximum likelihood estimation of a certain probability. The advantage of this model is that it does not need a lot of data to train efficiently enough and evaluate events with a high degree of accuracy.
• Support Vector Machines (SVM). It is a supervised machine learning model that is used for classification and regression analysis.
• K-Nearest Neighbor algorithms (KNN). This is one more technique that is used for classification and regression analysis.
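To make the first of these techniques concrete, here is a toy, from-scratch Gaussian Naïve Bayes classifier applied to invented transaction data. This is only a sketch for intuition, not a production fraud system: the features (amount, hour of day) and the sample transactions are made up for illustration, and real systems use far richer features and established libraries rather than hand-rolled code.

```python
import math

class TinyGaussianNB:
    """Gaussian Naive Bayes: model each feature per class as an
    independent normal distribution and pick the most likely class."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors, self.stats = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            feats = []
            for col in zip(*rows):  # one column per feature
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col) or 1e-9
                feats.append((mu, var))
            self.stats[c] = feats
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.priors[c])  # log prior of the class
            for v, (mu, var) in zip(x, self.stats[c]):
                # add the log of the Gaussian density for this feature
                lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy transactions: (amount, hour of day); 0 = legitimate, 1 = fraudulent
X = [(10, 14), (12, 15), (9, 13), (11, 16), (900, 3), (950, 2), (1000, 4)]
y = [0, 0, 0, 0, 1, 1, 1]
model = TinyGaussianNB().fit(X, y)
print(model.predict((920, 3)))    # large night-time charge, flagged as 1
print(model.predict((10.5, 14)))  # small daytime charge, classified as 0
```

As the article notes, the appeal of Naïve Bayes is that even this tiny amount of training data is enough for the model to separate the two behavior patterns.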
How Can You Prevent Credit Card Fraud?
Of course, it is almost impossible to protect your business 100%, since anti-fraud protection will always resemble a game of cat and mouse. That is, the more secure the solution you use, the more effort hackers will put into breaking through your line of defense. However, there are a number of rules that still apply.
1. Make sure all of your employees understand the importance of protecting your business from fraudulent intrusion. In 2021, insider attacks and the hacking of devices that employees use to work from home are among the top cybersecurity trends.
2. Use a bank fraud detection machine learning solution, as most fraudulent transactions can be stopped at the intent stage.
3. Be especially careful during critical loads on your website, for example, during the holidays or sales, as fraudsters use these opportunities, realizing that the business may not have enough resources to control all transactions without exception.
The Best Fraud Detection Companies
However, efficient fraud detection with AI and machine learning for banking and eCommerce will not be possible without the top-notch anti-fraud software provided by the market-leading companies below.
• ClearSale. This is top-notch software aimed at providing merchants with ultimate protection against fraudulent transactions and chargebacks, suitable for small and enterprise-level businesses alike.
• Signifyd. The solution provided by this company is aimed at ensuring payment compliance, revenue protection, and abuse prevention.
• Riskified. This is not only a credit card fraud prevention solution. It is also an outstanding tool for improving the eCommerce sales funnel, capturing leads, and turning them into loyal customers. Risk prevention is one of the main goals of the software.
• Kount. This application is used by more than 9,000 companies worldwide, and it has helped them significantly reduce chargebacks, prevent friendly fraud and account takeover, and stay protected from other types of online scams.
Conclusion
Thus, there are a lot of good fraud detection companies that may provide you with high-class software for preventing fraudulent activities, protecting your data and finances, and strengthening your reputation on the market. However, not all of them can be tailored to the specifics of your business the way a custom anti-fraud solution can. Considering the relevance of anti-fraud protection, using some of the AI-powered applications above or creating your own protective software from scratch becomes essential. SPD Group has vast experience in ML and AI development and always prioritizes your data and money safety, so you are welcome to get in touch and get more actionable insights on ways to improve your anti-fraud protection.
Understanding how javascript plays a role with css?
I was looking into creating a few navigation bars for practice. In today's world, being responsive is a must, since mobile devices are everywhere.
I want to understand the terminology, not so much the code. I want a navigation bar that condenses to a simple button (hamburger icon) when the window width reaches 480px.
Should I first design the navigation bar as if it were on a smaller device, then use JavaScript to check the size of the window and, if the size is larger, change the parameters of the CSS? Or should I simply just use @media and have that change without JavaScript?
The reason why I'm asking is because I see a ton of examples out there that utilize both JavaScript and CSS.
I just want to understand the terminology in plain English before I jump in, and see what is necessary and what isn't.
Thank you!
simply just use @media and have that change without javascript
You can use CSS and media queries to change the navigation bar to a hamburger button at the prescribed width. However, you will still need JS so that you can then open the mobile navigation menu on click, as mobiles don't work very well with hover. (There are CSS methods using the checkbox hack to get around this, but in the end it's still easier to add a little JS to create the open and close of the mobile menu with a click action, and indeed the checkbox hack is not well supported on Android.)
The menus themselves should be styled by css and you should use the same html for the mobile menu (in most cases) rather than duplicating html or creating another menu.
How can I approach it using JavaScript so that the onclick event toggles whenever I click the navigation toggle? I've seen examples like:
However, most examples I've seen include jQuery, and jQuery isn't something I know; it's also something I don't feel like taking on, since I don't really know JavaScript too well to begin with.
What is a physical server?
A physical server functions very similarly to a regular computer, just on a different scale. It’s a hardware server that includes memory, a hard drive, and network connectivity, and runs operating systems and applications off its internal hardware resources. A physical server can be dedicated to a single user (a dedicated server) or shared by multiple users (e.g. through virtualization, or other techniques that manage access and resource usage). All servers are physical in the end.
It’s often referred to as a bare-metal server since its hardware is used by an operating system rather than a virtualization platform.
Physical server
What’s the difference between a physical server and virtual server?
While a physical server is essentially a business-grade powerful computer, a virtual server is a software computer that emulates the processes of an actual physical computer. Virtual servers, unlike physical ones, operate in a multi-user environment, with multiple virtual servers running on the same physical hardware. Virtual servers are the virtualization of the computing resources of a physical server.
When getting a dedicated physical server makes sense
One of the most common reasons for using a physical server for a workload is to let that workload use all available resources: a physical server dedicated to just one user. Additionally, if you manage or provide hosting for others as a vendor, it could be useful to purchase or rent a physical server and organize a hosting solution for client websites on it through virtualization or other architectures. Another reason to run a physical server is simply that some hardware that may be needed cannot be virtualized. There are also some platforms that are not virtualized, so in that case again the use of a physical server makes sense.
Pros and cons of a physical server
Some of the advantages of having a physical server are that a user's IT team has full access to the server at all times, which is particularly useful for high-demand operations. The server can also be fully customized by an IT team based on a user's or business's needs.
There are also a few disadvantages of using physical servers. They can be expensive, both in the initial investment and in the long term, due to maintenance costs, updates, and even replacements in the case of hardware failures. Other important disadvantages to consider are downtime and the fact that users cannot scale storage in small increments once the maximum workload is reached.
linux/sound/soc/codecs/cs42l51.c
1/*
2 * cs42l51.c
3 *
4 * ASoC Driver for Cirrus Logic CS42L51 codecs
5 *
6 * Copyright (c) 2010 Arnaud Patard <[email protected]>
7 *
8 * Based on cs4270.c - Copyright (c) Freescale Semiconductor
9 *
10 * This program is free software; you can redistribute it and/or modify
11 * it under the terms of the GNU General Public License version 2 as
12 * published by the Free Software Foundation.
13 *
14 * This program is distributed in the hope that it will be useful,
15 * but WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 * GNU General Public License for more details.
18 *
19 * For now:
  20 * - Only I2C is supported, not SPI
21 * - master mode *NOT* supported
22 */
23
24#include <linux/module.h>
25#include <linux/slab.h>
26#include <sound/core.h>
27#include <sound/soc.h>
28#include <sound/tlv.h>
29#include <sound/initval.h>
30#include <sound/pcm_params.h>
31#include <sound/pcm.h>
32#include <linux/regmap.h>
33
34#include "cs42l51.h"
35
36enum master_slave_mode {
37 MODE_SLAVE,
38 MODE_SLAVE_AUTO,
39 MODE_MASTER,
40};
41
42struct cs42l51_private {
43 unsigned int mclk;
44 unsigned int audio_mode; /* The mode (I2S or left-justified) */
45 enum master_slave_mode func;
46};
47
48#define CS42L51_FORMATS ( \
49 SNDRV_PCM_FMTBIT_S16_LE | SNDRV_PCM_FMTBIT_S16_BE | \
50 SNDRV_PCM_FMTBIT_S18_3LE | SNDRV_PCM_FMTBIT_S18_3BE | \
51 SNDRV_PCM_FMTBIT_S20_3LE | SNDRV_PCM_FMTBIT_S20_3BE | \
52 SNDRV_PCM_FMTBIT_S24_LE | SNDRV_PCM_FMTBIT_S24_BE)
53
54static int cs42l51_get_chan_mix(struct snd_kcontrol *kcontrol,
55 struct snd_ctl_elem_value *ucontrol)
56{
57 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
58 unsigned long value = snd_soc_read(codec, CS42L51_PCM_MIXER)&3;
59
60 switch (value) {
61 default:
62 case 0:
63 ucontrol->value.enumerated.item[0] = 0;
64 break;
65 /* same value : (L+R)/2 and (R+L)/2 */
66 case 1:
67 case 2:
68 ucontrol->value.enumerated.item[0] = 1;
69 break;
70 case 3:
71 ucontrol->value.enumerated.item[0] = 2;
72 break;
73 }
74
75 return 0;
76}
77
78#define CHAN_MIX_NORMAL 0x00
79#define CHAN_MIX_BOTH 0x55
80#define CHAN_MIX_SWAP 0xFF
81
82static int cs42l51_set_chan_mix(struct snd_kcontrol *kcontrol,
83 struct snd_ctl_elem_value *ucontrol)
84{
85 struct snd_soc_codec *codec = snd_soc_kcontrol_codec(kcontrol);
86 unsigned char val;
87
88 switch (ucontrol->value.enumerated.item[0]) {
89 default:
90 case 0:
91 val = CHAN_MIX_NORMAL;
92 break;
93 case 1:
94 val = CHAN_MIX_BOTH;
95 break;
96 case 2:
97 val = CHAN_MIX_SWAP;
98 break;
99 }
100
101 snd_soc_write(codec, CS42L51_PCM_MIXER, val);
102
103 return 1;
104}
105
106static const DECLARE_TLV_DB_SCALE(adc_pcm_tlv, -5150, 50, 0);
107static const DECLARE_TLV_DB_SCALE(tone_tlv, -1050, 150, 0);
108
109static const DECLARE_TLV_DB_SCALE(aout_tlv, -10200, 50, 0);
110
111static const DECLARE_TLV_DB_SCALE(boost_tlv, 1600, 1600, 0);
112static const char *chan_mix[] = {
113 "L R",
114 "L+R",
115 "R L",
116};
117
118static SOC_ENUM_SINGLE_EXT_DECL(cs42l51_chan_mix, chan_mix);
119
120static const struct snd_kcontrol_new cs42l51_snd_controls[] = {
121 SOC_DOUBLE_R_SX_TLV("PCM Playback Volume",
122 CS42L51_PCMA_VOL, CS42L51_PCMB_VOL,
123 0, 0x19, 0x7F, adc_pcm_tlv),
124 SOC_DOUBLE_R("PCM Playback Switch",
125 CS42L51_PCMA_VOL, CS42L51_PCMB_VOL, 7, 1, 1),
126 SOC_DOUBLE_R_SX_TLV("Analog Playback Volume",
127 CS42L51_AOUTA_VOL, CS42L51_AOUTB_VOL,
128 0, 0x34, 0xE4, aout_tlv),
129 SOC_DOUBLE_R_SX_TLV("ADC Mixer Volume",
130 CS42L51_ADCA_VOL, CS42L51_ADCB_VOL,
131 0, 0x19, 0x7F, adc_pcm_tlv),
132 SOC_DOUBLE_R("ADC Mixer Switch",
133 CS42L51_ADCA_VOL, CS42L51_ADCB_VOL, 7, 1, 1),
134 SOC_SINGLE("Playback Deemphasis Switch", CS42L51_DAC_CTL, 3, 1, 0),
135 SOC_SINGLE("Auto-Mute Switch", CS42L51_DAC_CTL, 2, 1, 0),
136 SOC_SINGLE("Soft Ramp Switch", CS42L51_DAC_CTL, 1, 1, 0),
137 SOC_SINGLE("Zero Cross Switch", CS42L51_DAC_CTL, 0, 0, 0),
138 SOC_DOUBLE_TLV("Mic Boost Volume",
139 CS42L51_MIC_CTL, 0, 1, 1, 0, boost_tlv),
140 SOC_SINGLE_TLV("Bass Volume", CS42L51_TONE_CTL, 0, 0xf, 1, tone_tlv),
141 SOC_SINGLE_TLV("Treble Volume", CS42L51_TONE_CTL, 4, 0xf, 1, tone_tlv),
142 SOC_ENUM_EXT("PCM channel mixer",
143 cs42l51_chan_mix,
144 cs42l51_get_chan_mix, cs42l51_set_chan_mix),
145};
146
147/*
148 * to power down, one must:
149 * 1.) Enable the PDN bit
150 * 2.) enable power-down for the select channels
151 * 3.) disable the PDN bit.
152 */
153static int cs42l51_pdn_event(struct snd_soc_dapm_widget *w,
154 struct snd_kcontrol *kcontrol, int event)
155{
156 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm);
157
158 switch (event) {
159 case SND_SOC_DAPM_PRE_PMD:
160 snd_soc_update_bits(codec, CS42L51_POWER_CTL1,
161 CS42L51_POWER_CTL1_PDN,
162 CS42L51_POWER_CTL1_PDN);
163 break;
164 default:
165 case SND_SOC_DAPM_POST_PMD:
166 snd_soc_update_bits(codec, CS42L51_POWER_CTL1,
167 CS42L51_POWER_CTL1_PDN, 0);
168 break;
169 }
170
171 return 0;
172}
173
174static const char *cs42l51_dac_names[] = {"Direct PCM",
175 "DSP PCM", "ADC"};
176static SOC_ENUM_SINGLE_DECL(cs42l51_dac_mux_enum,
177 CS42L51_DAC_CTL, 6, cs42l51_dac_names);
178static const struct snd_kcontrol_new cs42l51_dac_mux_controls =
179 SOC_DAPM_ENUM("Route", cs42l51_dac_mux_enum);
180
181static const char *cs42l51_adcl_names[] = {"AIN1 Left", "AIN2 Left",
182 "MIC Left", "MIC+preamp Left"};
183static SOC_ENUM_SINGLE_DECL(cs42l51_adcl_mux_enum,
184 CS42L51_ADC_INPUT, 4, cs42l51_adcl_names);
185static const struct snd_kcontrol_new cs42l51_adcl_mux_controls =
186 SOC_DAPM_ENUM("Route", cs42l51_adcl_mux_enum);
187
188static const char *cs42l51_adcr_names[] = {"AIN1 Right", "AIN2 Right",
189 "MIC Right", "MIC+preamp Right"};
190static SOC_ENUM_SINGLE_DECL(cs42l51_adcr_mux_enum,
191 CS42L51_ADC_INPUT, 6, cs42l51_adcr_names);
192static const struct snd_kcontrol_new cs42l51_adcr_mux_controls =
193 SOC_DAPM_ENUM("Route", cs42l51_adcr_mux_enum);
194
195static const struct snd_soc_dapm_widget cs42l51_dapm_widgets[] = {
196 SND_SOC_DAPM_MICBIAS("Mic Bias", CS42L51_MIC_POWER_CTL, 1, 1),
197 SND_SOC_DAPM_PGA_E("Left PGA", CS42L51_POWER_CTL1, 3, 1, NULL, 0,
198 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
199 SND_SOC_DAPM_PGA_E("Right PGA", CS42L51_POWER_CTL1, 4, 1, NULL, 0,
200 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
201 SND_SOC_DAPM_ADC_E("Left ADC", "Left HiFi Capture",
202 CS42L51_POWER_CTL1, 1, 1,
203 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
204 SND_SOC_DAPM_ADC_E("Right ADC", "Right HiFi Capture",
205 CS42L51_POWER_CTL1, 2, 1,
206 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
207 SND_SOC_DAPM_DAC_E("Left DAC", "Left HiFi Playback",
208 CS42L51_POWER_CTL1, 5, 1,
209 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
210 SND_SOC_DAPM_DAC_E("Right DAC", "Right HiFi Playback",
211 CS42L51_POWER_CTL1, 6, 1,
212 cs42l51_pdn_event, SND_SOC_DAPM_PRE_POST_PMD),
213
214 /* analog/mic */
215 SND_SOC_DAPM_INPUT("AIN1L"),
216 SND_SOC_DAPM_INPUT("AIN1R"),
217 SND_SOC_DAPM_INPUT("AIN2L"),
218 SND_SOC_DAPM_INPUT("AIN2R"),
219 SND_SOC_DAPM_INPUT("MICL"),
220 SND_SOC_DAPM_INPUT("MICR"),
221
222 SND_SOC_DAPM_MIXER("Mic Preamp Left",
223 CS42L51_MIC_POWER_CTL, 2, 1, NULL, 0),
224 SND_SOC_DAPM_MIXER("Mic Preamp Right",
225 CS42L51_MIC_POWER_CTL, 3, 1, NULL, 0),
226
227 /* HP */
228 SND_SOC_DAPM_OUTPUT("HPL"),
229 SND_SOC_DAPM_OUTPUT("HPR"),
230
231 /* mux */
232 SND_SOC_DAPM_MUX("DAC Mux", SND_SOC_NOPM, 0, 0,
233 &cs42l51_dac_mux_controls),
234 SND_SOC_DAPM_MUX("PGA-ADC Mux Left", SND_SOC_NOPM, 0, 0,
235 &cs42l51_adcl_mux_controls),
236 SND_SOC_DAPM_MUX("PGA-ADC Mux Right", SND_SOC_NOPM, 0, 0,
237 &cs42l51_adcr_mux_controls),
238};
239
240static const struct snd_soc_dapm_route cs42l51_routes[] = {
241 {"HPL", NULL, "Left DAC"},
242 {"HPR", NULL, "Right DAC"},
243
244 {"Left ADC", NULL, "Left PGA"},
245 {"Right ADC", NULL, "Right PGA"},
246
247 {"Mic Preamp Left", NULL, "MICL"},
248 {"Mic Preamp Right", NULL, "MICR"},
249
250 {"PGA-ADC Mux Left", "AIN1 Left", "AIN1L" },
251 {"PGA-ADC Mux Left", "AIN2 Left", "AIN2L" },
252 {"PGA-ADC Mux Left", "MIC Left", "MICL" },
253 {"PGA-ADC Mux Left", "MIC+preamp Left", "Mic Preamp Left" },
254 {"PGA-ADC Mux Right", "AIN1 Right", "AIN1R" },
255 {"PGA-ADC Mux Right", "AIN2 Right", "AIN2R" },
256 {"PGA-ADC Mux Right", "MIC Right", "MICR" },
257 {"PGA-ADC Mux Right", "MIC+preamp Right", "Mic Preamp Right" },
258
259 {"Left PGA", NULL, "PGA-ADC Mux Left"},
260 {"Right PGA", NULL, "PGA-ADC Mux Right"},
261};
262
263static int cs42l51_set_dai_fmt(struct snd_soc_dai *codec_dai,
264 unsigned int format)
265{
266 struct snd_soc_codec *codec = codec_dai->codec;
267 struct cs42l51_private *cs42l51 = snd_soc_codec_get_drvdata(codec);
268
269 switch (format & SND_SOC_DAIFMT_FORMAT_MASK) {
270 case SND_SOC_DAIFMT_I2S:
271 case SND_SOC_DAIFMT_LEFT_J:
272 case SND_SOC_DAIFMT_RIGHT_J:
273 cs42l51->audio_mode = format & SND_SOC_DAIFMT_FORMAT_MASK;
274 break;
275 default:
276 dev_err(codec->dev, "invalid DAI format\n");
277 return -EINVAL;
278 }
279
280 switch (format & SND_SOC_DAIFMT_MASTER_MASK) {
281 case SND_SOC_DAIFMT_CBM_CFM:
282 cs42l51->func = MODE_MASTER;
283 break;
284 case SND_SOC_DAIFMT_CBS_CFS:
285 cs42l51->func = MODE_SLAVE_AUTO;
286 break;
287 default:
288 dev_err(codec->dev, "Unknown master/slave configuration\n");
289 return -EINVAL;
290 }
291
292 return 0;
293}
294
295struct cs42l51_ratios {
296 unsigned int ratio;
297 unsigned char speed_mode;
298 unsigned char mclk;
299};
300
301static struct cs42l51_ratios slave_ratios[] = {
302 { 512, CS42L51_QSM_MODE, 0 }, { 768, CS42L51_QSM_MODE, 0 },
303 { 1024, CS42L51_QSM_MODE, 0 }, { 1536, CS42L51_QSM_MODE, 0 },
304 { 2048, CS42L51_QSM_MODE, 0 }, { 3072, CS42L51_QSM_MODE, 0 },
305 { 256, CS42L51_HSM_MODE, 0 }, { 384, CS42L51_HSM_MODE, 0 },
306 { 512, CS42L51_HSM_MODE, 0 }, { 768, CS42L51_HSM_MODE, 0 },
307 { 1024, CS42L51_HSM_MODE, 0 }, { 1536, CS42L51_HSM_MODE, 0 },
308 { 128, CS42L51_SSM_MODE, 0 }, { 192, CS42L51_SSM_MODE, 0 },
309 { 256, CS42L51_SSM_MODE, 0 }, { 384, CS42L51_SSM_MODE, 0 },
310 { 512, CS42L51_SSM_MODE, 0 }, { 768, CS42L51_SSM_MODE, 0 },
311 { 128, CS42L51_DSM_MODE, 0 }, { 192, CS42L51_DSM_MODE, 0 },
312 { 256, CS42L51_DSM_MODE, 0 }, { 384, CS42L51_DSM_MODE, 0 },
313};
314
315static struct cs42l51_ratios slave_auto_ratios[] = {
316 { 1024, CS42L51_QSM_MODE, 0 }, { 1536, CS42L51_QSM_MODE, 0 },
317 { 2048, CS42L51_QSM_MODE, 1 }, { 3072, CS42L51_QSM_MODE, 1 },
318 { 512, CS42L51_HSM_MODE, 0 }, { 768, CS42L51_HSM_MODE, 0 },
319 { 1024, CS42L51_HSM_MODE, 1 }, { 1536, CS42L51_HSM_MODE, 1 },
320 { 256, CS42L51_SSM_MODE, 0 }, { 384, CS42L51_SSM_MODE, 0 },
321 { 512, CS42L51_SSM_MODE, 1 }, { 768, CS42L51_SSM_MODE, 1 },
322 { 128, CS42L51_DSM_MODE, 0 }, { 192, CS42L51_DSM_MODE, 0 },
323 { 256, CS42L51_DSM_MODE, 1 }, { 384, CS42L51_DSM_MODE, 1 },
324};
325
326static int cs42l51_set_dai_sysclk(struct snd_soc_dai *codec_dai,
327 int clk_id, unsigned int freq, int dir)
328{
329 struct snd_soc_codec *codec = codec_dai->codec;
330 struct cs42l51_private *cs42l51 = snd_soc_codec_get_drvdata(codec);
331
332 cs42l51->mclk = freq;
333 return 0;
334}
335
336static int cs42l51_hw_params(struct snd_pcm_substream *substream,
337 struct snd_pcm_hw_params *params,
338 struct snd_soc_dai *dai)
339{
340 struct snd_soc_codec *codec = dai->codec;
341 struct cs42l51_private *cs42l51 = snd_soc_codec_get_drvdata(codec);
342 int ret;
343 unsigned int i;
344 unsigned int rate;
345 unsigned int ratio;
346 struct cs42l51_ratios *ratios = NULL;
347 int nr_ratios = 0;
348 int intf_ctl, power_ctl, fmt;
349
350 switch (cs42l51->func) {
351 case MODE_MASTER:
352 return -EINVAL;
353 case MODE_SLAVE:
354 ratios = slave_ratios;
355 nr_ratios = ARRAY_SIZE(slave_ratios);
356 break;
357 case MODE_SLAVE_AUTO:
358 ratios = slave_auto_ratios;
359 nr_ratios = ARRAY_SIZE(slave_auto_ratios);
360 break;
361 }
362
363 /* Figure out which MCLK/LRCK ratio to use */
364 rate = params_rate(params); /* Sampling rate, in Hz */
365 ratio = cs42l51->mclk / rate; /* MCLK/LRCK ratio */
366 for (i = 0; i < nr_ratios; i++) {
367 if (ratios[i].ratio == ratio)
368 break;
369 }
370
371 if (i == nr_ratios) {
372 /* We did not find a matching ratio */
373 dev_err(codec->dev, "could not find matching ratio\n");
374 return -EINVAL;
375 }
376
377 intf_ctl = snd_soc_read(codec, CS42L51_INTF_CTL);
378 power_ctl = snd_soc_read(codec, CS42L51_MIC_POWER_CTL);
379
380 intf_ctl &= ~(CS42L51_INTF_CTL_MASTER | CS42L51_INTF_CTL_ADC_I2S
381 | CS42L51_INTF_CTL_DAC_FORMAT(7));
382 power_ctl &= ~(CS42L51_MIC_POWER_CTL_SPEED(3)
383 | CS42L51_MIC_POWER_CTL_MCLK_DIV2);
384
385 switch (cs42l51->func) {
386 case MODE_MASTER:
387 intf_ctl |= CS42L51_INTF_CTL_MASTER;
388 power_ctl |= CS42L51_MIC_POWER_CTL_SPEED(ratios[i].speed_mode);
389 break;
390 case MODE_SLAVE:
391 power_ctl |= CS42L51_MIC_POWER_CTL_SPEED(ratios[i].speed_mode);
392 break;
393 case MODE_SLAVE_AUTO:
394 power_ctl |= CS42L51_MIC_POWER_CTL_AUTO;
395 break;
396 }
397
398 switch (cs42l51->audio_mode) {
399 case SND_SOC_DAIFMT_I2S:
400 intf_ctl |= CS42L51_INTF_CTL_ADC_I2S;
401 intf_ctl |= CS42L51_INTF_CTL_DAC_FORMAT(CS42L51_DAC_DIF_I2S);
402 break;
403 case SND_SOC_DAIFMT_LEFT_J:
404 intf_ctl |= CS42L51_INTF_CTL_DAC_FORMAT(CS42L51_DAC_DIF_LJ24);
405 break;
406 case SND_SOC_DAIFMT_RIGHT_J:
407 switch (params_width(params)) {
408 case 16:
409 fmt = CS42L51_DAC_DIF_RJ16;
410 break;
411 case 18:
412 fmt = CS42L51_DAC_DIF_RJ18;
413 break;
414 case 20:
415 fmt = CS42L51_DAC_DIF_RJ20;
416 break;
417 case 24:
418 fmt = CS42L51_DAC_DIF_RJ24;
419 break;
420 default:
421 dev_err(codec->dev, "unknown format\n");
422 return -EINVAL;
423 }
424 intf_ctl |= CS42L51_INTF_CTL_DAC_FORMAT(fmt);
425 break;
426 default:
427 dev_err(codec->dev, "unknown format\n");
428 return -EINVAL;
429 }
430
431 if (ratios[i].mclk)
432 power_ctl |= CS42L51_MIC_POWER_CTL_MCLK_DIV2;
433
434 ret = snd_soc_write(codec, CS42L51_INTF_CTL, intf_ctl);
435 if (ret < 0)
436 return ret;
437
438 ret = snd_soc_write(codec, CS42L51_MIC_POWER_CTL, power_ctl);
439 if (ret < 0)
440 return ret;
441
442 return 0;
443}
444
445static int cs42l51_dai_mute(struct snd_soc_dai *dai, int mute)
446{
447 struct snd_soc_codec *codec = dai->codec;
448 int reg;
449 int mask = CS42L51_DAC_OUT_CTL_DACA_MUTE|CS42L51_DAC_OUT_CTL_DACB_MUTE;
450
451 reg = snd_soc_read(codec, CS42L51_DAC_OUT_CTL);
452
453 if (mute)
454 reg |= mask;
455 else
456 reg &= ~mask;
457
458 return snd_soc_write(codec, CS42L51_DAC_OUT_CTL, reg);
459}
460
461static const struct snd_soc_dai_ops cs42l51_dai_ops = {
462 .hw_params = cs42l51_hw_params,
463 .set_sysclk = cs42l51_set_dai_sysclk,
464 .set_fmt = cs42l51_set_dai_fmt,
465 .digital_mute = cs42l51_dai_mute,
466};
467
468static struct snd_soc_dai_driver cs42l51_dai = {
469 .name = "cs42l51-hifi",
470 .playback = {
471 .stream_name = "Playback",
472 .channels_min = 1,
473 .channels_max = 2,
474 .rates = SNDRV_PCM_RATE_8000_96000,
475 .formats = CS42L51_FORMATS,
476 },
477 .capture = {
478 .stream_name = "Capture",
479 .channels_min = 1,
480 .channels_max = 2,
481 .rates = SNDRV_PCM_RATE_8000_96000,
482 .formats = CS42L51_FORMATS,
483 },
484 .ops = &cs42l51_dai_ops,
485};
486
487static int cs42l51_codec_probe(struct snd_soc_codec *codec)
488{
489 int ret, reg;
490
491 /*
492 * DAC configuration
493 * - Use signal processor
494 * - auto mute
495 * - vol changes immediate
496 * - no de-emphasize
497 */
498 reg = CS42L51_DAC_CTL_DATA_SEL(1)
499 | CS42L51_DAC_CTL_AMUTE | CS42L51_DAC_CTL_DACSZ(0);
500 ret = snd_soc_write(codec, CS42L51_DAC_CTL, reg);
501 if (ret < 0)
502 return ret;
503
504 return 0;
505}
506
507static struct snd_soc_codec_driver soc_codec_device_cs42l51 = {
508 .probe = cs42l51_codec_probe,
509
510 .component_driver = {
511 .controls = cs42l51_snd_controls,
512 .num_controls = ARRAY_SIZE(cs42l51_snd_controls),
513 .dapm_widgets = cs42l51_dapm_widgets,
514 .num_dapm_widgets = ARRAY_SIZE(cs42l51_dapm_widgets),
515 .dapm_routes = cs42l51_routes,
516 .num_dapm_routes = ARRAY_SIZE(cs42l51_routes),
517 },
518};
519
520const struct regmap_config cs42l51_regmap = {
521 .max_register = CS42L51_CHARGE_FREQ,
522 .cache_type = REGCACHE_RBTREE,
523};
524EXPORT_SYMBOL_GPL(cs42l51_regmap);
525
526int cs42l51_probe(struct device *dev, struct regmap *regmap)
527{
528 struct cs42l51_private *cs42l51;
529 unsigned int val;
530 int ret;
531
532 if (IS_ERR(regmap))
533 return PTR_ERR(regmap);
534
535 cs42l51 = devm_kzalloc(dev, sizeof(struct cs42l51_private),
536 GFP_KERNEL);
537 if (!cs42l51)
538 return -ENOMEM;
539
540 dev_set_drvdata(dev, cs42l51);
541
542 /* Verify that we have a CS42L51 */
543 ret = regmap_read(regmap, CS42L51_CHIP_REV_ID, &val);
544 if (ret < 0) {
545 dev_err(dev, "failed to read I2C\n");
546 goto error;
547 }
548
549 if ((val != CS42L51_MK_CHIP_REV(CS42L51_CHIP_ID, CS42L51_CHIP_REV_A)) &&
550 (val != CS42L51_MK_CHIP_REV(CS42L51_CHIP_ID, CS42L51_CHIP_REV_B))) {
551 dev_err(dev, "Invalid chip id: %x\n", val);
552 ret = -ENODEV;
553 goto error;
554 }
555 dev_info(dev, "Cirrus Logic CS42L51, Revision: %02X\n",
556 val & CS42L51_CHIP_REV_MASK);
557
558 ret = snd_soc_register_codec(dev,
559 &soc_codec_device_cs42l51, &cs42l51_dai, 1);
560error:
561 return ret;
562}
563EXPORT_SYMBOL_GPL(cs42l51_probe);
564
565const struct of_device_id cs42l51_of_match[] = {
566 { .compatible = "cirrus,cs42l51", },
567 { }
568};
569MODULE_DEVICE_TABLE(of, cs42l51_of_match);
570EXPORT_SYMBOL_GPL(cs42l51_of_match);
571
572MODULE_AUTHOR("Arnaud Patard <[email protected]>");
573MODULE_DESCRIPTION("Cirrus Logic CS42L51 ALSA SoC Codec Driver");
574MODULE_LICENSE("GPL");
575
Worksheet on Change of Subject of Formula | Free Changing the Subject of a Formula Worksheet with Answers pdf
If you are looking for an easy and simple procedure to change the subject of a formula or equation, then this is the right page. Here, we have given a detailed explanation of how to change the subject of an equation in a printable worksheet. This free downloadable activity sheet on the Change of Subject of an Expression helps kids to practice in a fun way.
So, make use of this Changing the Subject of a Formula Worksheet with Answers pdf and try to solve basic to complex problems on changing the subject of a formula.
Free & Printable Worksheet on Changing the Subject of the Formula Pdf
I. Change the subject as indicated in the following formulas:
(i) V = u + ft, make u the subject
(ii) L = 2 (a + b), make b the subject
(iii) X = my + c, make c the subject
Solution:
(i) Given that V = u + ft
Subtract ft on both sides
V – ft = u + ft – ft
V – ft = u
Hence, the subject of the formula u = V – ft.
(ii) Given that L = 2 (a + b)
Divide 2 on both sides
\(\frac { L }{ 2 } \) = \(\frac { 2(a+b) }{ 2 } \)
a + b = \(\frac { L }{ 2 } \)
Subtract a on both sides
a + b – a = \(\frac { L }{ 2 } \) – a
b = \(\frac { L }{ 2 } \) – a
Hence, the subject of the formula b = \(\frac { L }{ 2 } \) – a.
(iii) Given that X = my + c
Subtract my on both sides
X – my = my + c – my
X – my = c
Hence, the subject of the formula c = X – my.
II. What is the subject in each of the following formulas or equations? Change the subject as indicated.
(i) If 3ay + 2b² = 3by + 2a², write the formula for ‘y’ in terms of a, b in the simplest form.
(ii) In the expression S= 2(lb + bh + lh) what is the subject. Write the formula with ‘h’ as the subject.
Solution:
(i) Given expression is 3ay + 2b² = 3by + 2a²
Collect the y terms on one side: 3ay – 3by = 2a² – 2b²
3y(a – b) = 2(a – b)(a + b)
Divide both sides by 3(a – b);
y = \(\frac { 2 }{ 3 } \)(a + b)
Hence, the formula for y in terms of a and b is y = \(\frac { 2 }{ 3 } \)(a + b).
(ii) Given that S = 2(lb + bh + lh)
Here, S is the subject, but now we have to change the subject of the formula to h;
S = 2lb + 2h(b + l)
h = \(\frac { S-2lb }{ 2(b+l) } \)
III. Make h the subject of the formula r = h(a-b). Find h with the help of known values r = 100, a=5, and b=3.
Solution:
Given r = h(a-b)
Divide (a-b) on both sides
\(\frac { r }{ a-b } \) = \(\frac { h(a-b) }{ a-b } \)
h = \(\frac { r }{ a-b } \)
Now, substitute the given values r = 100, a=5 and b=3 in the rearranged formula;
h = \(\frac { 100 }{ 5-3 } \)
h = \(\frac { 100 }{ 2 } \)
h = 50.
IV. Change x as the subject of the formula \(\frac { x }{ a } \) + \(\frac { y }{ b } \) = 1. Find x, when a=6, b=3, and y=9.
Solution:
Given that \(\frac { x }{ a } \) + \(\frac { y }{ b } \) = 1
\(\frac { x }{ a } \) = 1 – \(\frac { y }{ b } \)
x = a(1 – \(\frac { y }{ b } \))
x = a – \(\frac { a }{ b } \) × y, which is the formula for x.
Now, find the value of x by substituting a=6, b=3, and y=9 in the formula;
x = 6 – \(\frac { 6 }{ 3 } \) × 9 = 6 – 2 × 9 = 6 – 18 = -12
V. In the formula x = y(1+zt), x is the subject of the formula. But find z as the subject when x=150, y=100, and t=2.
Solution:
Given formula is x = y(1+zt)
x = y + yzt
subtract y on both sides
x – y = yzt
z = \(\frac { x-y }{ yt } \), hence z is the subject of the formula.
Now, substitute the given values in the rearranged formula;
z = \(\frac { 150-100 }{ 100 × 2 } \) = \(\frac { 50 }{ 200 } \) = \(\frac { 1 }{ 4 } \)
z = \(\frac { 1 }{ 4 } \).
VI. In the formula PV = C, P is pressure, V is the volume of a gas, and C is a constant. If P = 2 when V = \(\frac { 5 }{ 2 } \), find the value of P when V = 4.
Solution:
Given that when P = 2, V = \(\frac { 5 }{ 2 } \)
PV = C
2 x \(\frac { 5 }{ 2 } \) = C
C = 5
If V = 4, then
PV = C
P(4) = 5
P = \(\frac { 5 }{ 4 } \)
HTML
<main>
<h1>Basic Sortable Table</h1>
<fieldset>
<legend>Live region</legend>
<button id="off" type="button" class="liveliness toggleLiveRegion" aria-pressed="true" onclick="toggleLiveRegion(this.id);">off</button>
<button id="polite" type="button" class="liveliness toggleLiveRegion" aria-pressed="false" onclick="toggleLiveRegion(this.id);">polite</button>
<button id="assertive" type="button" class="liveliness toggleLiveRegion" aria-pressed="false" onclick="toggleLiveRegion(this.id);">assertive</button>
<p>
Live region output:<br>
<code id="FakeLiveRegion">empty</code>
</p>
</fieldset>
<div role="region" aria-labelledby="Cap" tabindex="0">
<table>
<caption id="Cap">Books I May or May Not Have Read</caption>
<thead>
<tr>
<th id="ColAuthor">
<button type="button" id="ColAuthorSortButton" onclick="toggleSort(this.id,'ColAuthor','1','SortNote')">
<span>Author</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
<th id="ColTitle">
<button type="button" id="ColTitleSortButton" onclick="toggleSort(this.id,'ColTitle','2','SortNote')">
<span>Title</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
<th id="ColYear">
<button type="button" id="ColYearSortButton" onclick="toggleSort(this.id,'ColYear','3','SortNote')">
<span>Year</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
<th id="ColISBN13">
<button type="button" id="ColISBN13SortButton" onclick="toggleSort(this.id,'ColISBN13','4','SortNote')">
<span>ISBN-13</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
<th id="ColISBN10">
<button type="button" id="ColISBN10SortButton" onclick="toggleSort(this.id,'ColISBN10','5','SortNote')">
<span>ISBN-10</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
<th id="ColMood">
<button type="button" id="ColMoodSortButton" onclick="toggleSort(this.id,'ColMood','6','SortNote')">
<span>Mood</span>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort asc" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-asc"></use>
</svg>
<svg viewBox="0 0 425 233.7" focusable="false" class="sort des" aria-hidden="true">
<use xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="#icon-sort-des"></use>
</svg>
</button>
</th>
</tr>
</thead>
<tbody>
<tr>
<td>Murasaki Shikibu<br>(<span lang="ja">紫 式部</span>, Lady Murasaki)</td>
<td>The Tale of Genji<br>(<span lang="ja">源氏物語</span>, Genji monogatari)</td>
<td>1021</td>
<td>9780142437148</td>
<td>014243714X</td>
<td>😣</td>
</tr>
<tr>
<td lang="es">Miguel De Cervantes</td>
<td>The Ingenious Gentleman Don Quixote of La Mancha</td>
<td>1605</td>
<td>9783125798502</td>
<td>3125798507</td>
<td>😢</td>
</tr>
<tr>
<td lang="fr">Gabrielle-Suzanne Barbot de Villeneuve</td>
<td lang="fr">La Belle et la Bête</td>
<td>1740</td>
<td>9781910880067</td>
<td>191088006X</td>
<td>😂</td>
</tr>
<tr>
<td>Sir Isaac Newton</td>
<td>The Method of Fluxions and Infinite Series: With Its Application to the Geometry of Curve-lines</td>
<td>1763</td>
<td>9781330454862</td>
<td>1330454863</td>
<td>🤔</td>
</tr>
<tr>
<td>Mary Shelley</td>
<td>Frankenstein; or, The Modern Prometheus</td>
<td>1818</td>
<td>9781530278442</td>
<td>1530278449</td>
<td>😨</td>
</tr>
<tr>
<td>Herman Melville</td>
<td>Moby-Dick; or, The Whale</td>
<td>1851</td>
<td>9781530697908</td>
<td>1530697905</td>
<td>😐</td>
</tr>
<tr>
<td>Emma Dorothy Eliza Nevitte Southworth</td>
<td>The Hidden Hand</td>
<td>1888</td>
<td>9780813512969</td>
<td>0813512964</td>
<td>🤐</td>
</tr>
<tr>
<td>F. Scott Fitzgerald</td>
<td>The Great Gatsby</td>
<td>1925</td>
<td>9780743273565</td>
<td>0743273567</td>
<td>😒</td>
</tr>
<tr>
<td>George Orwell</td>
<td>Nineteen Eighty-Four</td>
<td>1948</td>
<td>9780451524935</td>
<td>0451524934</td>
<td>😠</td>
</tr>
<tr>
<td>Nnedi Okorafor</td>
<td>Who Fears Death</td>
<td>2010</td>
<td>9780756406691</td>
<td>0008288747</td>
<td>😔</td>
</tr>
</tbody>
</table>
</div>
<div id="SortNote" class="visually-hidden"></div>
<p>
Used in the post <a href="https://adrianroselli.com/2021/04/sortable-table-columns.html"><cite>Sortable Table Columns</cite></a>.
</p>
</main>
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" id="SVGsprites">
<style>
.l { fill: none; stroke-width: 30; stroke-miterlimit: 10; }
.t { stroke: none; }
</style>
<defs>
<g id="icon-sort" aria-labelledby="title-sort" aria-describedby="desc-sort" role="img">
<title id="title-sort">Sort</title>
<desc id="desc-sort"></desc>
<path class="t" d="M20.4 233.7L212.5 41.6l192.1 192.1z"/>
<path class="l" d="M414.4 223.1L212.5 21.2 10.6 223.1"/>
<path class="t" d="M404.6 306L212.5 498.1 20.4 306z"/>
<path class="l" d="M10.6 316.6l201.9 201.9 201.9-201.9"/>
</g>
<g id="icon-sort-asc" aria-labelledby="title-sort-asc" aria-describedby="desc-sort-asc" role="img">
<title id="title-sort-asc">Sort Ascending</title>
<desc id="desc-sort-asc"></desc>
<path class="t" d="M20.4 233.7L212.5 41.6l192.1 192.1z"/>
<path class="l" d="M414.4 223.1L212.5 21.2 10.6 223.1"/>
</g>
<g id="icon-sort-des" aria-labelledby="title-sort-des" aria-describedby="desc-sort-des" role="img">
<title id="title-sort-des">Sort Descending</title>
<desc id="desc-sort-des"></desc>
<path class="t" d="M404.6 0L212.5 192.1 20.4 0z"/>
<path class="l" d="M10.6 10.6l201.9 201.9L414.4 10.6"/>
</g>
</defs>
</svg>
CSS
:root {
--text-color: #333;
--bg-color: #eee;
--col-header-color: #333;
--col-header-hover-color: #fff;
}
@media screen and (-ms-high-contrast: active),
screen and (forced-colors: active) {
:root {
--col-header-color: ButtonText;
--col-header-hover-color: WindowText;
--col-header-hover-color: CanvasText;
}
}
body {
font-family: "Segoe UI", -apple-system, BlinkMacSystemFont, Roboto,
Oxygen-Sans, Ubuntu, Cantarell, "Helvetica Neue", sans-serif;
line-height: 1.4;
color: var(--text-color);
background-color: var(--bg-color);
}
#SVGsprites {
display: none;
}
main {
margin: 0 0 2em 0;
}
label {
margin-right: 2em;
}
table {
margin: 1em 0 0 0;
border-collapse: collapse;
border: 0.1em solid rgba(0, 0, 0, 0.1);
}
caption {
text-align: left;
font-style: italic;
padding: 0.25em 0.5em 0.5em 0.5em;
}
th,
td {
padding: 0.25em 0.5em 0.25em 1em;
vertical-align: text-top;
text-align: left;
text-indent: -0.5em;
}
td.sorted {
background-color: rgba(255, 255, 0, 0.15);
}
th {
vertical-align: bottom;
background-color: rgba(0, 0, 0, 0.1);
}
tr:nth-child(even) {
background-color: rgba(0, 0, 0, 0.05);
}
tr:nth-child(odd) {
background-color: rgba(255, 255, 255, 0.05);
}
td:nth-of-type(2) {
font-style: italic;
}
th:nth-of-type(3),
td:nth-of-type(3) {
text-align: right;
}
th:nth-of-type(6),
td:nth-of-type(6) {
text-align: center;
min-width: 4em;
}
th {
padding: 0;
text-indent: 0;
}
th > button {
background: transparent;
border: 1px solid transparent;
color: inherit;
font: inherit;
text-align: left;
cursor: pointer;
padding: 0.25em 0.5em 0.25em 1em;
white-space: nowrap;
width: 100%;
min-width: 4.5em;
display: grid;
grid-template-columns: minmax(2em, max-content) .65em auto;
grid-template-areas: "t a x" "t d x";
}
th > button > span {
grid-area: t;
padding-right: .5em;
}
th > button > .asc {
grid-area: a;
align-self: center;
}
th > button > .des {
grid-area: d;
align-self: center;
}
th > button::after {
content: "";
grid-area: x;
}
th > button:focus,
th > button:hover {
color: #fff;
color: var(--col-header-hover-color);
background-color: #666;
outline: none;
}
th > button svg.sort {
fill: transparent;
stroke: var(--col-header-color);
max-width: .65em;
max-height: 1.2em;
}
[aria-sort="ascending"] > button svg.asc {
stroke: var(--col-header-color);
fill: var(--col-header-color);
}
[aria-sort="descending"] > button svg.des {
stroke: var(--col-header-color);
fill: var(--col-header-color);
}
th:focus > button svg.sort, th:hover > button svg.sort, th:focus-within > button svg.sort {
stroke: var(--col-header-hover-color);
}
[aria-sort="ascending"] > button:focus svg.asc,
[aria-sort="ascending"] > button:hover svg.asc,
[aria-sort="descending"] > button:focus svg.des,
[aria-sort="descending"] > button:hover svg.des {
stroke: var(--col-header-hover-color);
fill: var(--col-header-hover-color);
}
/* Method to visually hide something but still */
/* make it available to screen readers */
.visually-hidden {
position: absolute;
top: auto;
overflow: hidden;
clip: rect(1px, 1px, 1px, 1px);
width: 1px;
height: 1px;
white-space: nowrap;
}
/* Scrolling, responsive table container */
/* https://adrianroselli.com/2020/11/under-engineered-responsive-tables.html */
[role="region"][aria-labelledby][tabindex] {
overflow: auto;
}
[role="region"][aria-labelledby][tabindex]:focus {
outline: .1em solid rgba(0,0,0,.1);
}
/* Scrolling Visual Cue */
[role="region"][aria-labelledby][tabindex] {
background: linear-gradient(to right, var(--bg-color) 30%, rgba(255, 255, 255, 0)),
linear-gradient(to right, rgba(255, 255, 255, 0), var(--bg-color) 70%) 0 100%,
radial-gradient(
farthest-side at 0% 50%,
rgba(0, 0, 0, 0.2),
rgba(0, 0, 0, 0)
),
radial-gradient(
farthest-side at 100% 50%,
rgba(0, 0, 0, 0.2),
rgba(0, 0, 0, 0)
)
0 100%;
background-repeat: no-repeat;
background-color: var(--bg-color);
background-size: 40px 100%, 40px 100%, 14px 100%, 14px 100%;
background-position: 0 0, 100%, 0 0, 100%;
background-attachment: local, local, scroll, scroll;
}
/* Live region toggle buttons*/
.liveliness {
border: .1em solid #11c;
border-radius: .25em;
padding: .25em .5em;
color: #fff;
background: #11c;
display: inline-block;
width: auto;
text-shadow: 1px 1px 1px #000;
font: inherit;
font-size: 1rem;
margin: .2em .1em;
}
.liveliness:focus, .liveliness:hover {
color: #11c;
background: #eee;
text-shadow: none;
outline: none;
}
.liveliness[aria-pressed]::before {
content: "⊘ ";
}
.liveliness[aria-pressed="true"]::before {
content: "✔ ";
}
.liveliness[aria-pressed="true"] {
border-color: #000;
}
JS
function toggleSort(btnID, colID, colNum, regionID) {
var theButton = document.getElementById(btnID);
var theColumn = document.getElementById(colID);
var liveRegion = document.getElementById(regionID);
var sortedTDs = document.querySelectorAll(
"td:nth-child(" + colNum + "), *[role=cell]:nth-child(" + colNum + ")"
);
var currSort = theColumn.getAttribute("aria-sort");
if (currSort == "descending") {
clearSorts();
theColumn.setAttribute("aria-sort", "ascending");
liveRegion.innerHTML = "sorted up";
} else {
clearSorts();
theColumn.setAttribute("aria-sort", "descending");
liveRegion.innerHTML = "sorted down";
}
for (var i = 0; i < sortedTDs.length; i++) {
sortedTDs[i].classList.add("sorted");
}
setTimeout(function () {
liveRegion.innerHTML = "";
}, 1000);
// for the fake live region to see the output
document.getElementById("FakeLiveRegion").innerHTML = liveRegion.innerHTML;
}
function clearSorts() {
var thSort = document.querySelectorAll("*[aria-sort]");
var tdSort = document.querySelectorAll(".sorted");
var thBtn = document.querySelectorAll(
"th > button, *[role=columnheader] > button"
);
for (var i = 0; i < thSort.length; i++) {
thSort[i].removeAttribute("aria-sort");
}
for (var i = 0; i < tdSort.length; i++) {
tdSort[i].classList.remove("sorted");
}
}
// Manage live region for demo
function flipLiveRegion(val) {
var liveRegion = document.getElementById('SortNote');
if (val == 'off') {
liveRegion.removeAttribute('aria-live');
} else {
liveRegion.setAttribute('aria-live', val);
}
}
function toggleLiveRegion(btnID) {
var deToggle = document.querySelectorAll("button.toggleLiveRegion[aria-pressed]");
for (var i = 0; i < deToggle.length; i++) {
deToggle[i].setAttribute("aria-pressed", "false");
}
var theButton = document.getElementById(btnID);
theButton.setAttribute("aria-pressed", "true");
flipLiveRegion(btnID);
}
Check if any element of a list is in a matrix?
I have a small list:
list2 = ['hi', 'ma', 'ja']
and I have a matrix too. for example:
matrix2 = [['high','h ight','hi ght','h i g ht'],
           ['man','ma n','ma th','mat h'],
           ['ja cket','j a ck et','jack et','ja m'],
           ['ma nkind','jack',' hi ','hi']]
And I need to determine, for each entry of matrix2, whether it contains any of the elements of list2. I tried to use np.isin():
np.isin(matrix2,list2)
But this code only checks whether an entry matches an element of list2 exactly. I need this output instead:
output = [[False, False, True, False],
          [False, True, True, False],
          [True, False, False, True],
          [True, False, True, False]]
As you can see in the output, the spaces are important too. For example, ‘hi’ without a space before and after it is not something that I want. I need to match the list elements while considering the spaces between the sets of characters.
Could anyone help me to solve this problem?
Answer
You have to loop over the matrix and then over each element of a row. Each element is a single string that you have to split to get the individual parts. Then you can check whether any of these parts is in list2.
list2 = ['hi', 'ma', 'ja']
matrix2 = [['high','h ight','hi ght','h i g ht'],
['man','ma n','ma th','mat h'],
['ja cket','j a ck et','jack et','ja m'],
['ma nkind','jack',' hi ','hi']]
output = []
for row in matrix2:
    new_row = [any(value in list2 for value in element.split()) for element in row]
    output.append(new_row)
print(output)
The result is
[[False, False, True, False],
[False, True, True, False],
[True, False, False, True],
[True, False, True, True]]
A minor change that takes the additional constraint of an existing space into account.
output = []
for row in matrix2:
    new_row = [any(value in list2 and ' ' in element for value in element.split()) for element in row]
    output.append(new_row)
print(output)
This will give you
[[False, False, True, False],
[False, True, True, False],
[True, False, False, True],
[True, False, True, False]]
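If a NumPy boolean array is preferred over nested lists, the same per-cell test can be applied element-wise. Note that `np.vectorize` is only a convenience wrapper (a loop in disguise, not a speedup), and the space rule in the predicate below is my reading of the question: a cell counts only if it contains a space and one of its space-separated parts is in list2.

```python
import numpy as np

list2 = ['hi', 'ma', 'ja']
matrix2 = [['high', 'h ight', 'hi ght', 'h i g ht'],
           ['man', 'ma n', 'ma th', 'mat h'],
           ['ja cket', 'j a ck et', 'jack et', 'ja m'],
           ['ma nkind', 'jack', ' hi ', 'hi']]

targets = set(list2)

def cell_matches(element):
    # True only for multi-part cells: the cell must contain a space and
    # at least one space-separated part must be a word from list2.
    return ' ' in element and any(part in targets for part in element.split())

# Element-wise application over a 4x4 object array; the result is a bool array.
mask = np.vectorize(cell_matches)(np.array(matrix2, dtype=object))
print(mask)
```

`mask.tolist()` then matches the desired output exactly, including the `' hi '` versus `'hi'` distinction in the last row.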
|
__label__pos
| 0.99169 |
I am looking for a way of dealing with the following situation:
1- Have an items collection in MongoDB
2- Have a users collection in DynamoDB
3- Each document in the items collection has a "reference" to a user or multiple users via an id attribute of the DynamoDB.
4- I need to query the items collection and populate each item with metadata from the user table (In each item, I have to replace the id reference/s with user metadata)
5- The DynamoDB has to remain as the main source of truth for the users, and can't move the MongoDB data over.
I'm wondering what would be the best solution for this.
At the moment I'm thinking of two options:
• One is getting the items from MongoDB, getting the users ids, query all the users from DynamoDB and inserting the user's data in the items. As the items are somewhat a complex collection, there is a lot of potential to introduce new bugs and making it hard to maintain in the future.
• Second one is copying the DynamoDB users to MongoDB and querying directly in MongoDB. The issue here is that the collection has to be kept in sync all the time and I'm not convinced about having the same data in different places due to the increased complexity.
As no approach is 100% perfect, just wondering if there could be a better way to do this.
Which option (or other) would be the way to go?
• thanks, @DocBrown, I just edited it to make it clearer: I need to query the items collection and populate each item with metadata from the user table. In each item, I have to replace the id reference with user metadata.
– alvm
Sep 24, 2021 at 13:27
• Thanks. Another thing I don't understand is why, in your first approach, the complexity of items has more potential to introduce bugs than your second. I mean, you wrote, regardless of the approach you choose, you have to "populate each item with metadata from the user table" - does "populate each item" mean something different to you than "insert the user's data in the items"?
– Doc Brown
Sep 24, 2021 at 13:53
• @DocBrown No, it means basically that. But it is not as straightforward: in any given item document you could have several references to user(s), nested within objects or an array of objects. There is a reasonable risk of missing edge cases that could introduce issues. Also, you could get one item or N items, and when getting N items, if N is large enough, performance could take a hit, as you have to iterate over every item/property and insert the user(s) wherever needed.
– alvm
Sep 24, 2021 at 14:11
• Ok, but it seems in both approaches, you get user data from the DynamoDB and store it inside the MongoDB, right?`In the first approach, you store it inside the item documents directly, in the second, you store it in a separate user collection. In both cases, you have to make sure the user data is up-to-date in the MongoDB, so where is the difference here?
– Doc Brown
Sep 24, 2021 at 14:37
• @DocBrown, no, the items collection only has a reference to the user by id. The population of the users, in this case, is done when a read request is made. The flow when a get request arrives: get items with user ids from MongoDB -> query DynamoDB for user metadata -> iterate and populate items with this user metadata -> return response items with user metadata.
– alvm
Sep 24, 2021 at 22:31
Regarding the first approach you suggested: the developers are responsible for the document structure, and if a new key holding a user id is added to the structure, they are responsible for querying it too; missing it is a bug they own. (The same applies in the second approach.)
Pros:
1. You don't need to sync data between databases.
2. The user info will be up to date (in the second approach it will be eventually consistent, with some delay).
Cons:
1. Each query depends on two databases, but that is not really an issue and can be mitigated by replicating the dbs.
For the second approach, you should use a queue like Kafka/RabbitMQ and stream updates from DynamoDB to MongoDB.
Pros:
1. If DynamoDB has a failover, the data is still accessible.
Cons:
1. There can be a delay before the user data is current.
2. It adds complexity to the architecture.
3. The sync mechanism adds a lot of calls to both dbs.
I think the first approach is better.
But I don't think the architecture is ideal; a single relational db seems like the classic fit here (though I don't know the full details and constraints).
• Yes, thanks for your time. I am a bit constrained, but this gives me ideas of how to approach it!
– alvm
Nov 1, 2021 at 3:00
PerlMonks
Re^2: Inline: where did the output go?
by andye (Curate)
on Jun 12, 2007 at 14:25 UTC
in reply to Re: Inline: where did the output go?
in thread Inline: where did the output go?
Hi cdarke,
I'm trying to work up a minimal example to show the problem - the issue I've got now is that when I make the example very minimal, the problem goes away. :)
You may be right that it's a buffering problem though: I've written an example script which prints a line from Perl, then a line from C, then a line from Perl. On my Mac the lines appear in the file in the expected way, but on the Linux machine where I've originally found the problem, they appear in this order:
1 Perl
3 Perl
2 C
(should be 1 Perl, 2 C, 3 Perl)
and putting a fflush(stdout) in the right place solves that problem...
...ok, so let's try doing that in the real script... success! Hooray!
Triple++ (if I could) to cdarke, and many thanks.
Best wishes, andye
Re^3: Inline: where did the output go?
by Anonymous Monk on Jun 12, 2007 at 14:36 UTC
Hummm. Not sure I deserve that. The buffer should be flushed on exit from the program, regardless of whether it goes to the screen or a file.
Ah, but (and I should maybe have mentioned this before) the process runs for a long time, i.e. a couple of days.
It's a long batch-processing job. (That's why I'm using Pdlpp to code part of it, because I need to get up to C speed otherwise it takes forever).
So,
- need to get output during the program run, as the point of the output is to show progress.
- when I've been testing it to see if anything was showing up in the output file, I've been waiting a while and when nothing appeared I've been killing the process.
Best wishes, andye
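The 1-3-2 ordering above is a generic stdio-buffering effect, not Perl-specific. A small Python reproduction (illustrative only; the thread's actual fix is fflush(stdout) in the Inline C code) mixes a buffered stream with a raw write to fd 1, just as the Perl interpreter and the embedded C each keep their own stdio buffer. When the child's output goes to a pipe, the buffered lines are flushed only at exit, so the raw line lands first:

```python
import subprocess
import sys
import textwrap

# Child program mixing buffered stream writes with a raw fd write.
child = textwrap.dedent("""
    import os, sys
    sys.stdout.write("1 Perl\\n")   # buffered
    os.write(1, b"2 C\\n")          # unbuffered, straight to fd 1
    sys.stdout.write("3 Perl\\n")   # buffered; flushed only at exit
""")

# With stdout redirected to a pipe the stream is block-buffered, so the
# raw write arrives before the buffered lines.
out = subprocess.run([sys.executable, "-c", child],
                     capture_output=True, text=True).stdout
print(out.splitlines())  # ['2 C', '1 Perl', '3 Perl']
```

Adding a flush after each buffered write (the fflush(stdout) equivalent) restores the 1, 2, 3 order, which matters here precisely because the job runs for days and the output is a progress indicator.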
Topic: Arduino Due Benchmark - Newton Approximation for Pi (Read 15150 times)
securd
Dec 23, 2012, 08:15 pm Last Edit: Dec 24, 2012, 03:45 am by Coding Badly Reason: 1
After receiving my Due, I wanted to benchmark its numeric processing power as compared to the Mega. I was also interested in understanding which of my standard Arduino shields and gadgets were compatible with the 3.3v I/O of the Due.
Being a recovering physicist, what better test than to approximate pi, and periodically display the approximation results on a 1638 display!
I utilized the slowly-converging Newton Approximation for pi, which does a reasonably good job of calculating pi / 4, using the infinite series 1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ...
This is simple to express in Arduino C, and uses the floating point libraries to further assess performance. I tested a Due and a Mega, connected to a JY-LKM1638 V1.2 display module (which is based on the TM1638 controller chip). The times required to traverse 100,000 iterations were as follows:
Due: 1785 ms
Mega: 6249 ms
(roughly 3.5x performance difference)
Upping ITERATIONS to 10000000 (ten million) gets pi accurate to about 8 significant digits!
Photo attached, and sketch follows:
-----
Code: [Select]
//
// Pi_2
//
// Steve Curd
// December 2012
//
// This program approximates pi utilizing the Newton's approximation. It quickly
// converges on the first 5-6 digits of precision, but converges verrrry slowly
// after that. For example, it takes over a million iterations to get to 7-8
// significant digits.
//
// For demonstration purposes, drives a JY-LKM1638 display module to show the
// approximated value after each 1,000 iterations, and toggles the pin13 LED for a
// visual "sign of life".
//
// I wrote this to evaluate the performance difference between the 8-bit Arduino Mega,
// and the 32-bit Arduino Due.
//
// Benchmark results for 100,000 iterations (pi accurate to 5 significant digits):
//
// Due: 1785 ms
// Mega: 6249 ms
//
// 1638 display module connections:
// VCC -> 3.3v
// GND -> GND
// DIO -> Pin 8
// CLK -> Pin 9
// STB0 -> Pin 7
//
//
#define ITERATIONS 20000000L // number of iterations
#define FLASH 1000 // blink LED every 1000 iterations
#include <TM1638.h>
// TM1638 module(DIO, CLK, STB0)
TM1638 module(8, 9, 7);
void setup() {
pinMode(13, OUTPUT);
Serial.begin(57600);
}
void loop() {
unsigned long start, time;
unsigned long niter=ITERATIONS;
int LEDcounter = 0;
boolean alternate = false;
unsigned long i, count=0; /* # of points in the 1st quadrant of unit circle */
double x = 1.0;
double temp, pi=1.0;
start = millis();
Serial.print("Beginning ");
Serial.print(niter);
Serial.println(" iterations...");
Serial.println();
count=0;
for ( i = 2; i < niter; i++) {
x *= -1.0;
pi += x / (2.0*(double)i-1);
if (LEDcounter++ > FLASH) {
LEDcounter = 0;
if (alternate) {
digitalWrite(13, HIGH);
alternate = false;
} else {
digitalWrite(13, LOW);
alternate = true;
}
temp = 40000000.0 * pi;
module.setDisplayToDecNumber( temp, 0x80);
}
}
time = millis() - start;
pi = pi * 4.0;
Serial.print("# of trials = ");
Serial.println(niter);
Serial.print("Estimate of pi = ");
Serial.println(pi, 10);
Serial.print("Time: "); Serial.print(time); Serial.println(" ms");
delay(10000);
}
Moderator edit: [code] [/code] tags added.
stimmer
There might be an imbalance between your tests as on a Uno/Mega double is only really a float (32 bits) but on the Due I think it is full double precision (64 bits). Try it again using float on the Due, you should still get 7 digits of precision out of it.
Due VGA library - http://arduino.cc/forum/index.php/topic,150517.0.html
securd
Ok, well, this is strange. I changed all of the doubles to floats. The Mega had exactly the same time (which reinforces your point). However the time for the Due actually went UP.
For due...
double: 1785 ms
float: 2056 ms
I haven't had time to go through the floating point library, but do you suppose the Due handles all floating points as doubles by default?
Steve.
stimmer
#3
Dec 23, 2012, 09:11 pm Last Edit: Dec 23, 2012, 09:14 pm by stimmer Reason: 1
No, it's just a 'feature' of C++ which makes it quite hard to get it to do calculations as floats - basically any floating point constant is considered to be a double, then because the calculation has one double in it the whole thing gets done in double precision. To get round it you have to put f after the constants, ie:
pi += x / (2.0f*(float)i-1.0f);
I am getting 675ms for 100000 iterations (3.1416058540), although I have removed the display driver code.
Due VGA library - http://arduino.cc/forum/index.php/topic,150517.0.html
robtillaart
reminds me of another PI tester that gives an indication of the quality of the random generator;
throw a dart on a square board with a (maximized) circle on it. The chance it is in the circle is PI/4
Code: [Select]
//
// FILE: pi.pde
// AUTHOR: Rob Tillaart
// DATE: 2011
//
// PURPOSE: approx pi by 'darting' randomly
//
float pi = 0;
long in = 0;
long out = 0;
int x = 0;
int y = 0;
int xx = 0;
int yy = 0;
void setup()
{
Serial.begin(115200);
}
void loop()
{
x = y; //random(0,101);
xx = yy;
y = random(0,101);
yy = y * y;
if (xx + yy <= 10000) in++;
out++;
if ((out % 10000) == 0) Serial.println(4.0 * in / out, 7);
}
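The dart-board estimator translates to a few lines of Python for a quick check (seeded so the run is reproducible; unlike the sketch above, this version draws x and y independently rather than reusing the previous sample):

```python
import math
import random

def dart_pi(n, seed=0):
    # Throw n darts at the unit square; the fraction landing inside the
    # quarter circle x^2 + y^2 <= 1 approaches pi/4.
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

est = dart_pi(100_000)
print(est)  # close to 3.14; typically within a few hundredths at n = 100,000
```

The statistical error shrinks like 1/sqrt(n), so this converges even more slowly than the alternating series, but as Rob notes it doubles as a rough quality check of the random generator.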
Rob Tillaart
Nederlandse sectie - http://arduino.cc/forum/index.php/board,77.0.html -
(Please do not PM for private consultancy)
securd
#5
Dec 24, 2012, 12:08 am Last Edit: Dec 24, 2012, 03:46 am by Coding Badly Reason: 1
Hi Stimmer,
That was it -- thank you! The new benchmark numbers, with the LED blink and display logic enabled:
Using float precision, 100,000 iterations with display:
Mega: 6249 ms
Due: 821 ms (Due is about 7.5x faster)
Using double precision, 100,000 iterations with display:
Mega: 6249 ms (obviously still using float)
Due: 1780 ms (actually using double)
(Using double precision on the Due, 8 million iterations will yield about 8 digits of precision in 143,000 ms. Wow, Newton converges slowly for high levels of precision...)
# of trials = 8000000
Estimate of pi = 3.1415927786
Time: 143102 ms
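That slow convergence is intrinsic to the series being summed: the truncation error shrinks only like 1/N, so each extra digit costs roughly ten times as many iterations. A quick host-side check (plain Python, helper name mine):

```python
import math

def leibniz_error(n):
    """Absolute error of the n-term Leibniz sum for pi."""
    s = sum((-1) ** k / (2 * k + 1) for k in range(n))
    return abs(4 * s - math.pi)

for n in (1_000, 10_000, 100_000):
    print(n, leibniz_error(n))  # the error falls roughly tenfold per decade of n
```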
Below is the code utilizing float, and runs on Mega and Due for apples-to-apples performance comparison:
Code:
//
// Pi_2
//
// Steve Curd
// December 2012
//
// This program approximates pi utilizing Newton's approximation. It quickly
// converges on the first 5-6 digits of precision, but converges verrrry slowly
// after that. For example, it takes over a million iterations to get to 7-8
// significant digits.
//
// For demonstration purposes, drives a JY-LKM1638 display module to show the
// approximated value after each 1,000 iterations, and toggles the pin13 LED for a
// visual "sign of life".
//
// I wrote this to evaluate the performance difference between the 8-bit Arduino Mega,
// and the 32-bit Arduino Due.
//
// Benchmark results for 100,000 iterations (pi accurate to 5 significant digits):
// Mega: 6249 ms
// Due: 821 ms (Due is about 7.5x faster)
//
// 1638 display module connections:
// VCC  -> 3.3v
// GND  -> GND
// DIO  -> Pin 8
// CLK  -> Pin 9
// STB0 -> Pin 7
//
//
#define ITERATIONS 100000L  // number of iterations
#define FLASH 1000          // blink LED every 1000 iterations

#include <TM1638.h>         // include the display library

// TM1638 module(DIO, CLK, STB0)
TM1638 module(8, 9, 7);

void setup() {
  pinMode(13, OUTPUT);      // set the LED up to blink every 1000 iterations
  Serial.begin(57600);
}

void loop() {
  unsigned long start, time;
  unsigned long niter = ITERATIONS;
  int LEDcounter = 0;
  boolean alternate = false;
  unsigned long i, count = 0;
  float x = 1.0;
  float temp, pi = 1.0;

  Serial.print("Beginning ");
  Serial.print(niter);
  Serial.println(" iterations...");
  Serial.println();

  start = millis();
  for (i = 2; i < niter; i++) {
    x *= -1.0;
    pi += x / (2.0f * (float)i - 1.0f);
    if (LEDcounter++ > FLASH) {
      LEDcounter = 0;
      if (alternate) {
        digitalWrite(13, HIGH);
        alternate = false;
      } else {
        digitalWrite(13, LOW);
        alternate = true;
      }
      temp = 40000000.0 * pi;
      module.setDisplayToDecNumber(temp, 0x80);
    }
  }
  time = millis() - start;
  pi = pi * 4.0;

  Serial.print("# of trials = ");
  Serial.println(niter);
  Serial.print("Estimate of pi = ");
  Serial.println(pi, 10);
  Serial.print("Time: "); Serial.print(time); Serial.println(" ms");
  delay(10000);
}
Moderator edit: [code] [/code] tags added.
decrux
Cool project,
maybe adding something that measures the RAM in use would enrich your benchmark.
Greetings
stan_w_gifford
Ran the last example on Arduino edison.........
11 ms
Stan
PCWorxLA
I haven't had time to go through the floating point library, but do you suppose the Due handles all floating points as doubles by default?
Steve.
Ok, well, this is strange. I changed all of the doubles to floats. The Mega had exactly the same time (which reinforces your point).
The official Arduino docs clearly state that for AVR, double is the same as float: likewise 32-bit, with 6-7 digits of precision.
Quote
However the time for the Due actually went UP.
For due...
double: 1785 ms
float: 2056 ms
I haven't had time to go through the floating point library, but do you suppose the Due handles all floating points as doubles by default?
As the Cortex-M3 core (on which the SAM3X used in the Due is based) doesn't have an FPU (only the M4 has an optional 32-bit FPU, and the M7 has an optional 32-bit or 64-bit FPU), and in order not to implement all the floating-point emulation functions in both 32-bit and 64-bit variants, it is probably safe to assume that all FP operations are indeed 64-bit by default, eventually truncated to 32-bit. That would explain the relatively small increase when using float vs. double on the Due, as each parameter and result has to be converted to 64-bit and back to 32-bit...
Ralf
JovanEps
Hi All,
as PCWorxLA rightly pointed out (some 3 years ago), ARM Cortex M4..M7 MCUs have SP and DP FPU units that do speed up securd's code.
On an STM32 Nucleo F767 board with an ARM M7 MCU (@216MHz), without the LCD display module, the code with float (SP - 32-bit FPU) makes the 100k run in less than 15 ms.
Code:
Beginning 100000
iterations...
# of trials = 100000
Estimate of pi = 3.1416058540 10
Time: 14 ms
Beginning 100000
iterations...
# of trials = 100000
Estimate of pi = 3.1416058540 10
Time: 15 ms
I made some modifications to that code to be able to compare its results with PC (single-thread C++) and ESP32 (NodeMCU-32 board) performance. The number of runs was increased to 10M, and LED blinking is either omitted or enabled every 100k-1M steps.
You can find more mbed code for STM32 Nucleo F7 on :
https://developer.mbed.org/users/JovanEps/code/Newton_s_approximation_bench/
This 10M iterations version on PC, ARM M7 and ESP 32 give:
Code:
----------------------
----- Nucleo M7 SP FPU -10M
----------------------
Beginning 10000000
iterations...
# of trials = 10000000
Estimate of pi = 3.1415936536 10
Time: 1204 ms
----------------------
----- Nucleo M7 DP FPU -10M
----------------------
Beginning 10000000
iterations...
# of trials = 10000000
Estimate of pi = 3.1415936536 10
Time: 2061 ms
-----------------------------------------
----------------------
----- ESP 32 SP FPU -10M
----------------------
Beginning 10000000 iterations...
# of trials = 10000000
Estimate of pi = 3.1415936536
Time: 2810 ms
----------------------
----- ESP 32 DP FPU -10M
----------------------
Beginning 10000000 iterations...
# of trials = 10000000
Estimate of pi = 3.1415936536
Time: 33497 ms
---------------------------------
----------------------
----- PC 1x thread on AMD Phenom II X6 1100T - 3.3GHz- SP/DP FPU -10M
----------------------
Beginning 10000000
iterations...
# of trials = 10000000
Estimate of pi = 3.1415927536 10
time: 0.094005 sec.
state: 0
Process returned 0 (0x0) execution time : 0.146 s
Press any key to continue.
Arduino IDE code for ESP32
Code:
#define ITERATIONS 10000000L  // number of iterations 100k-10M
#define FLASH 100000          // blink LED every 100k-1M iterations

void setup() {
  pinMode(13, OUTPUT);        // set the LED up to blink every 100k-1M iterations
  pinMode(LED_BUILTIN, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  unsigned long start, ttime;
  unsigned long niter = ITERATIONS;
  int LEDcounter = 0;
  boolean alternate = false;
  unsigned long i, count = 0;
  double x = 1.0;             // double
  double temp, pi = 1.0;      // double

  digitalWrite(LED_BUILTIN, LOW);
  Serial.print("Beginning ");
  Serial.print(niter);
  Serial.println(" iterations...");
  Serial.println();
  digitalWrite(LED_BUILTIN, HIGH);

  start = millis();
  for (i = 2; i < niter; i++) {
    x *= -1.0;
    pi += x / (2.0f * (double)i - 1.0f);  // double
    if (LEDcounter++ > FLASH) {
      LEDcounter = 0;
      if (alternate) {
        //digitalWrite(13, HIGH);
        digitalWrite(LED_BUILTIN, HIGH);
        alternate = false;
      } else {
        //digitalWrite(13, LOW);
        digitalWrite(LED_BUILTIN, LOW);
        alternate = true;
      }
      temp = (float) 40000000.0 * pi;
    }
  }
  ttime = millis() - start;
  pi = pi * 4.0;
  digitalWrite(LED_BUILTIN, LOW);

  Serial.print("# of trials = ");
  Serial.println(niter);
  Serial.print("Estimate of pi = ");
  Serial.println(pi, 16);
  Serial.print("Time: "); Serial.print(ttime); Serial.println(" ms");
  delay(3000);
}
If you want to read more about the performance of the new generation of 32-bit IoT-ready MCUs, check this paper:
http://www.eventiotic.com/eventiotic/library/paper/326?event=0
or
https://www.researchgate.net/publication/316173015_Analysis_of_the_performance_of_the_new_generation_of_32-bit_Microcontrollers_for_IoT_and_Big_Data_Application
Class 6 Factorization MCQs Quiz and Answers Tests pdf Download
Practice class 6 factorization MCQs in this math quiz for test prep. These fundamental algebra quiz questions are multiple choice questions (MCQs) on class 6 factorization, for example: the factorization of the expression 4z(3a + 2b - 4c) + (3a + 2b - 4c) is, with answer choices (4z + 1)(3a + 2b - 4c), (4z + 1) + (3a + 2b - 4c), (4z + 1) - (3a + 2b - 4c), and (4z - 1)(3a - 2b - 4c), for competitive exam preparation worksheets. Free math revision notes to learn class 6 factorization with MCQ-based online tests.
MCQs on Class 6 Factorization Quiz pdf Download
MCQ. Answer of factorization of expression 4z(3a + 2b - 4c) + (3a + 2b - 4c) is
1. (4z + 1)(3a + 2b -4c)
2. (4z + 1) + (3a + 2b -4c)
3. (4z + 1) - (3a + 2b -4c)
4. (4z - 1)(3a - 2b -4c)
A
MCQ. Answer of factorization of expression (3x - 2y)(a + b) + (2x - 3y)(a + b) is
1. (5x + 5y)(a + b)
2. (5x + 5y) - (a + b)
3. (6x - 8y) + (a - b)
4. (5x - 5y)(a + b)
D
MCQ. Factorization of p(4q + 3) +4(4q+3) leads to
1. (p - 4)(4q - 3)
2. 4pq + 12q
3. (p + 4)(4q + 3)
4. 4pq + 12 q
C
MCQ. By factorizing 3x - 9y - 12z, answer will be
1. 3(x - 3y + 4z)
2. 3(x + 3y + 4z)
3. 3(x + 3y - 4z)
4. 3(x - 3y - 4z)
D
MCQ. By factorization of (17x - 34), answer must be
1. x(17-2)
2. x(17 + 2)
3. 17(x - 2)
4. 17(x + 2)
C
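These answer keys are easy to spot-check numerically: evaluate both sides of an identity at arbitrary values and compare. A small Python sketch (covering the first, third, and fifth questions; the names are mine):

```python
def close(a, b, eps=1e-9):
    return abs(a - b) < eps

# Q1: 4z(3a + 2b - 4c) + (3a + 2b - 4c) == (4z + 1)(3a + 2b - 4c)
def q1(a, b, c, z):
    m = 3 * a + 2 * b - 4 * c
    return close(4 * z * m + m, (4 * z + 1) * m)

# Q3: p(4q + 3) + 4(4q + 3) == (p + 4)(4q + 3)
def q3(p, q):
    return close(p * (4 * q + 3) + 4 * (4 * q + 3), (p + 4) * (4 * q + 3))

# Q5: 17x - 34 == 17(x - 2)
def q5(x):
    return close(17 * x - 34, 17 * (x - 2))

assert q1(1, 2, 3, 4) and q3(2, 3) and q5(7)
```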
Q:
How do you connect a laptop to a TV?
A:
Quick Answer
You can connect your laptop to a TV in a number of ways, depending on the connections on both devices. If your laptop and your television share the same type of connection, you need to purchase the right connecting cord, switch your television to the correct input, then configure your laptop if necessary. If your laptop and computer do not share a connection, you may be able to purchase an adapter that will allow your devices to work together.
Full Answer
1. Check the ports on your laptop and television
Look for the same type of ports on both the laptop and the television. Common ports that most modern televisions and laptops share include HDMI, VGA, DVI, Display Port and S-Video. Some laptops also have mini-HDMI, mini-DVI and mini Display Ports and will need an adapter to work with a matching port on the television.
2. Connect the corresponding cable
Purchase the corresponding cable, and adapter if necessary, that connects your laptop to the television. Your laptop may need to be turned off before connecting to recognize the connection. You can find all types of connection cords, including HDMI, VGA, DVI and S-Video, at most electronic stores. HDMI cords provide the highest quality video and audio. Other connections may require you to also connect audio cables that have a headphone jack on one end for the laptop and two-channel RCA connection on the other end for the television.
3. Tune your television, and configure your laptop
Tune your television to the correct input channel. The television should display the name of the input when you change it. Turn on your laptop if you have it turned off. Depending on the type of laptop you have, you may have to adjust your laptop's display settings to get the computer screen to display on the television. Some laptops give you the option to have the television mirror exactly what is on your laptop screen or act as an extended display so you can use it for other purposes, like viewing a movie while still using your laptop.
To change the display settings on a Windows laptop, go to the Control Panel, select Display, and click on Adjust Resolution. In the Display drop down box, select the TV. If sound does not initially play on the television, on the laptop go to Control Panel, and select Sound. Then find the television in the list of playback devices, select it, and click Set Default.
The process for connecting a laptop to a television using a media streaming device, such as a Roku or Chromecast, depends on the specific device. In general, they are plugged into the TV's HDMI port and can be accessed by laptops and other devices over Wi-Fi.
Good afternoon, I'm new to the forum, so I apologize if I'm asking this badly or if it's a very silly question.
It turns out I'm building an application, and I have a RecyclerView whose rows contain an EditText holding the quantity of products I'm going to order. So far so good, except that when I edit the quantity it isn't stored in the adapter right away; it stays in the view until I add a new element, at which point the adapter is traversed and it gets stored. This is my CustomAdapter:
public class AdaptadorPedido extends RecyclerView.Adapter<AdaptadorPedido.MyHolder> {

    Context context;
    List<DataAdapter> datos;
    String[] etValArr;
    String[] Fin;

    public AdaptadorPedido(Context context, List<DataAdapter> datos) {
        this.context = context;
        this.datos = datos;
        etValArr = new String[datos.size()];
        Fin = new String[datos.size()];
    }

    @Override
    //public MyHolder onCreateViewHolder(ViewGroup parent, int viewType) {
    public MyHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View view = LayoutInflater.from(context).inflate(R.layout.row_view, parent, false);
        MyHolder mh = new MyHolder(view, new CustomEtListener());
        return mh;
        //return new MyHolder(view);
    }

    @Override
    public void onBindViewHolder(final MyHolder holder, final int position) {
        holder.Nproducto.setText(datos.get(position).getNombre());
        holder.Lab.setText(datos.get(position).getLabo());
        holder.Lista.setText(datos.get(position).getList());
        holder.precio.setText(datos.get(position).getPre());
        holder.Cantidad.setText(datos.get(position).getCant());
        holder.myCustomEtListener.updatePosition(position);
        holder.Cantidad.setText(etValArr[position]);

        holder.Nproducto.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                AlertDialog.Builder adb = new AlertDialog.Builder(v.getContext());
                adb.setTitle("Eliminar?");
                adb.setMessage("¿Seguro que desea sacar este producto de la OP ?");
                adb.setNegativeButton("No", null);
                adb.setPositiveButton("Si", new AlertDialog.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        datos.remove(position);
                        notifyDataSetChanged();
                    }
                });
                adb.show();
                return false;
            }
        });

        holder.Lab.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                AlertDialog.Builder adb = new AlertDialog.Builder(v.getContext());
                adb.setTitle("Eliminar?");
                adb.setMessage("¿Seguro que desea sacar este producto de la OP ?");
                final int positionToRemove = position;
                adb.setNegativeButton("No", null);
                adb.setPositiveButton("Si", new AlertDialog.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        datos.remove(position);
                        notifyDataSetChanged();
                    }
                });
                adb.show();
                return false;
            }
        });
    }

    @Override
    public int getItemCount() {
        return datos.size();
    }

    public class MyHolder extends RecyclerView.ViewHolder {
        TextView Nproducto, Lab, Lista, precio;
        EditText Cantidad;
        public CustomEtListener myCustomEtListener;

        //public MyHolder(View itemView) {
        public MyHolder(View itemView, CustomEtListener myList) {
            super(itemView);
            Nproducto = (TextView) itemView.findViewById(R.id.tv_Producto);
            Lab = (TextView) itemView.findViewById(R.id.tv_Lab);
            Lista = (TextView) itemView.findViewById(R.id.tv_Lista);
            precio = (TextView) itemView.findViewById(R.id.tv_Precio);
            Cantidad = (EditText) itemView.findViewById(R.id.et_cant);
            myCustomEtListener = myList;
            Cantidad.addTextChangedListener(myCustomEtListener);
        }
    }

    public static class DataAdapter {
        String Nombre, Labo, List, Pre, Cant;

        public DataAdapter(String Nombre, String Labo, String List, String Pre, String Cant) {
            this.Nombre = Nombre;
            this.Labo = Labo;
            this.List = List;
            this.Pre = Pre;
            this.Cant = Cant;
        }

        public String getNombre() { return Nombre; }
        public void setNombre(String nombre) { Nombre = nombre; }
        public String getLabo() { return Labo; }
        public void setLabo(String labo) { Labo = labo; }
        public String getList() { return List; }
        public void setList(String list) { List = list; }
        public String getPre() { return Pre; }
        public void setPre(String pre) { Pre = pre; }
        public String getCant() { return Cant; }
        public void setCant(String cant) { Cant = cant; }
    }

    private class CustomEtListener implements TextWatcher {
        private int position;

        public void updatePosition(int position) {
            this.position = position;
        }

        @Override
        public void beforeTextChanged(CharSequence s, int start, int count, int after) { }

        @Override
        public void onTextChanged(CharSequence s, int start, int before, int count) {
            etValArr[position] = s.toString();
        }

        @Override
        public void afterTextChanged(Editable s) {
        }
    }
}
I appreciate your help.
1 Answer
Welcome to SO en español. This can be simple: when the text inside the EditText changes, you can update the value of the Cant field in the corresponding element of the list of DataAdapter objects. It would be done like this:
@Override
public void onTextChanged(CharSequence s, int start, int before, int count) {
    //etValArr[position] = s.toString();
    datos.get(position).setCant(s.toString());
}
When the data for that element is loaded into the RecyclerView again, it will show the new value.
• Friend, thank you very much, it was truly a great help; it worked perfectly for me. – May 8, 2017 at 23:03
• My pleasure. This is what is regularly done when updating data in an Adapter: you actually modify the values in the objects' properties. ᕦ /͠- ‿ ͝-\ ᕥ Regards!
– Jorgesys
May 8, 2017 at 23:04
Performance Profiling Using Chrome Code Snippets
Dr. Gleb Bahmutov PhD
Kensho
Kensho app
When does the page start painting?
(function timeFirstPaint() {
  var fp = chrome.loadTimes().firstPaintTime -
           chrome.loadTimes().startLoadTime;
  console.log('first paint: ' + fp);
}());
How long does the page load?
Code Snippets
find Expensive images
Include missing styles
<h1>
<i class="fa fa-flag"></i> Include missing styles
<i class="fa fa-heart"></i>
</h1>
(function addFontAwesomeCssLink() {
  var ss = document.createElement('link');
  ss.type = 'text/css';
  ss.rel = 'stylesheet';
  ss.href = '//maxcdn.bootstrapcdn.com/font-awesome/4.3.0/css/font-awesome.min.css';
  document.getElementsByTagName("head")[0].appendChild(ss);
}());
wrapping javascript
function add(a, b) {
  return a + b;
}

function doSomething() {
  console.log(add(2, 3));
}

doSomething();
// 5
wrapping javascript
function add(a, b) {
  return a + b;
}

function doSomething() {
  console.log(add(2, 3));
}

// replace add
var _add = add;
add = function () {
  console.log('adding', arguments);
  return _add.apply(null, arguments);
}

doSomething();
// adding { '0': 2, '1': 3 }
// 5
wrapping javascript
function add(a, b) {
  return a + b;
}

function doSomething(add) {
  console.log(add(2, 3));
}

setTimeout(doSomething.bind(null, add), 1000);

// replace add
var _add = add;
add = function () {
  console.log('adding', arguments);
  return _add.apply(null, arguments);
}

// 5  (only "5" is logged -- doSomething already captured the original add via bind)
Prefer wrapping methods
function add(a, b) {
  return a + b;
}

var calc = {
  add: add
};

function doSomething(calc) {
  console.log(calc.add(2, 3));
}

setTimeout(doSomething.bind(null, calc), 1000);

var _add = calc.add;
calc.add = function () {
  console.log('adding', arguments);
  return _add.apply(calc, arguments);
}

// adding { '0': 2, '1': 3 }
// 5
Works in Angular
var selector = 'load';
var methodName = 'load';
var el = angular.element(document.getElementById(selector));
var scope = el.scope() || el.isolateScope();
var fn = scope[methodName];
var $timeout = el.injector().get('$timeout');
var $q = el.injector().get('$q');

scope[methodName] = function () {
  console.profile(name);
  console.time(name);
  // method can return a value or a promise
  var returned = fn();
  $q.when(returned).finally(function finishedMethod() {
    console.timeStamp('finished', methodName);
    $timeout(function afterDOMUpdate() {
      console.timeStamp('dom updated after', methodName);
      console.timeEnd(name);
      console.profileEnd();
      scope[methodName] = fn;
    }, 0);
  });
};
Works with prototypes
// tough cases like jQuery plugins
new Photostack(document.getElementById('photostack-3'));

function profile(proto, methodName) {
  var originalMethod = proto[methodName];

  function restoreMethod() {
    console.timeEnd(methodName);
    proto[methodName] = originalMethod;
  }

  proto[methodName] = function () {
    console.time(methodName);
    originalMethod.apply(this, arguments);
    restoreMethod();
  };
}

// where we want to profile Photostack.prototype._rotate
profile(Photostack.prototype, '_rotate');
js profiling example
my profiling rules
• Profile in a "clean" browser
• profile actual application
• optimize top bottleneck first
Warning signs
• try - catch blocks
• modifying arguments
• arguments = arguments || []
• deleting / adding properties
• delete foo.bar
• calling function with different argument types
function add(a, b) {
  return a + b;
}

add(2, 3);
add('foo', 'bar');
Will not be optimized
• eval
• debug
• long functions!
iojs 1.8.1 has more than 300 switches. A lot related to performance tuning
iojs --v8-options
bad flame chart
good flame chart
Find GC events
var list = [], k;
for (k = 0; k < N; k += 1) {
list.push( ... );
}
Profile memory
avoid gc events
Preallocate memory
// bad
var list = [], k;
for (k = 0; k < N; k += 1) {
  list.push( ... );
}

// better
var list = [], k;
list.length = N;
for (k = 0; k < N; k += 1) {
  list[k] = ...;
}
Profile memory
avoid gc events with preallocated arrays
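The preallocation idea is not V8-specific. Here is a rough Python analogue (CPython lists rather than JavaScript arrays — the constant factors differ, but the shape of the optimization is the same; function names are mine):

```python
def build_push(n):
    out = []
    for k in range(n):
        out.append(k * k)  # grows the list incrementally
    return out

def build_prealloc(n):
    out = [0] * n          # allocate the backing store once
    for k in range(n):
        out[k] = k * k     # then just fill the slots
    return out
```

Both produce identical lists; the second variant avoids repeated reallocation of the backing store during the loop.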
time web worker
var worker = new Worker('worker.js');

function renderPrimes(primes) {
  var html = primes.map(primeToRow).join('\n');
  document.querySelector('#results').innerHTML = html;
}

worker.onmessage = function (e) {
  console.log('worker has finished');
  renderPrimes(e.data);
};

var primesApp = {
  worker: worker,
  findFirstPrimes: function (n) {
    console.log('finding first', n, 'primes');
    worker.postMessage({ cmd: 'primes', n: n });
  }
};

document.querySelector('#find').addEventListener('click', function () {
  var n = Number(document.querySelector('#n').value);
  primesApp.findFirstPrimes(n);
});
Need to time separate actions
var m1 = obj1[methodName1];
var m2 = obj2[methodName2];

obj1[methodName1] = function () {
  console.profile('separate');
  console.time('separate');
  m1.apply(obj1, arguments);
};

obj2[methodName2] = function () {
  console.timeEnd('separate');
  console.profileEnd('separate');
  m2.apply(obj2, arguments);
};

// call with
// obj1 = primesApp.worker, methodName1 = 'postMessage'
// obj2 = primesApp.worker, methodName2 = 'onmessage'
Starting the profiler manually seems to severely affect the performance.
What about timeline?
Layout profiler
Paint profiler
Observe paint
Can code snippets be updated?
YES, inception-style
Related
MUST watch: "DevTools: State of the Union" by @addyosmani
performance matters
this presentation at slides.com/bahmutov/code-snippets
Event plug: Great League of Engineers
"What makes a team productive?" talks and panel
April 30th, at Brightcove office
http://www.eventbrite.com/e/great-league-of-engineers-tickets-16657822997
Chrome DevTools code snippets became my favorite tool when investigating performance bottlenecks in web applications. A JavaScript fragment can be stored as a named snippet in the "Sources" DevTools panel and executed in the current page's context, just as if it were code executed in the browser's console.
Elementary Geometry for College St...
6th Edition
Daniel C. Alexander + 1 other
ISBN: 9781285195698
Textbook Problem
The center of a circle of radius 2 inches is at a distance of 10 inches from the center of a circle of radius length 3 inches. To the nearest tenth of an inch, what is the approximate length of a common internal tangent? Use the hint provided in Exercise 38.
(HINT: use similar triangles to find OD and DP. Then apply the Pythagorean Theorem twice.)
[Figure for Chapter 6.3, Problem 39E: the two circles described above and their common internal tangent (image not reproduced)]
To determine
To find:
To find length of a common internal tangent.
Explanation
Given that, the center of a circle of radius 2 in. is at a distance of 10 in. from the center of a circle of radius length 3 in.
That is, OA = 2, PB = 3, and OP = 10, and AB is the common internal tangent.
In the diagram (not reproduced here), tangent AB crosses OP at point D.

Since AB is a common internal tangent, AB ⊥ OA and AB ⊥ PB.

By the vertical angle theorem, m∠ADO = m∠PDB, and m∠OAD = m∠PBD = 90°, so △OAD ~ △PBD.

If two triangles are similar, then the ratio of any two corresponding segments (such as altitudes, medians, or angle bisectors) equals the ratio of any two corresponding sides. Therefore,

OA/PB = OD/PD
2/3 = OD/PD
3·OD = 2·PD

We know that OP = OD + PD, so 10 = OD + PD, which gives PD = 10 − OD (since OP = 10).

Substituting this in the equation above:

3·OD = 2(10 − OD)
5·OD = 20
OD = 4 and PD = 6

Applying the Pythagorean theorem in right triangles OAD and PBD:

AD = √(OD² − OA²) = √(16 − 4) = 2√3
DB = √(PD² − PB²) = √(36 − 9) = 3√3

So AB = AD + DB = 2√3 + 3√3 = 5√3 ≈ 8.7.

To the nearest tenth of an inch, the common internal tangent is about 8.7 inches long.
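As a sanity check, the whole computation fits in a few lines of Python (a host-side sketch; the variable names are mine):

```python
import math

r1, r2, d = 2.0, 3.0, 10.0   # radii and distance between centers

# Similar triangles give OD/PD = r1/r2, and OD + PD = d
od = d * r1 / (r1 + r2)      # = 4
pd = d - od                  # = 6

# Pythagorean theorem in each right triangle
ad = math.sqrt(od**2 - r1**2)  # = 2*sqrt(3)
db = math.sqrt(pd**2 - r2**2)  # = 3*sqrt(3)
ab = ad + db                   # common internal tangent = 5*sqrt(3)
print(round(ab, 1))            # 8.7
```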
Code and data directory (codedir)
Sections
The codedir is the main directory for Puppet code and data. It is used by Puppet master and Puppet apply, but not by Puppet agent. It contains environments (which contain your manifests and modules), a global modules directory for all environments, and your Hiera data and configuration.
Location
The codedir is located in one of the following locations:
• *nix: /etc/puppetlabs/code
• *nix non-root users: ~/.puppetlabs/etc/code
• Windows: %PROGRAMDATA%\PuppetLabs\code (usually C:\ProgramData\PuppetLabs\code)
When Puppet is running as root, a Windows user with administrator privileges, or the puppet user, it uses a system-wide codedir. When running as a non-root user, it uses a codedir in that user's home directory.
When running Puppet commands and services as root or puppet, you should usually use the system codedir. To use the same codedir as the Puppet agent, or Puppet master, run admin commands such as puppet module with sudo.
Note: Running the master as a Rack application is deprecated. When Puppet master is running as a Rack application, the config.ru file must explicitly set --codedir to the system codedir. The example config.ru file provided with the Puppet source does this.
To configure the location of the codedir, set the codedir setting in your puppet.conf file, such as:
codedir = /etc/puppetlabs/code
Important: Puppet Server doesn't use the codedir setting in puppet.conf, and instead uses the jruby-puppet.master-code-dir setting in puppetserver.conf. When using a non-default codedir, you must change both settings.
Interpolation of $codedir
The value of the codedir is discovered before other settings, so you can refer to it in other puppet.conf settings by using the $codedir variable in the value. For example, the $codedir variable is used as part of the value for the environmentpath setting:
[master]
environmentpath = $codedir/override_environments:$codedir/environments
This allows you to avoid absolute paths in your settings and keep your Puppet-related files together.
Contents
The codedir contains environments, including manifests and modules, a global modules directory for all environments, Hiera data, and Hiera's configuration file, hiera.yaml.
The code and data directories are:
• environments: Contains alternate versions of the modules and manifests directories, to enable code changes to be tested on smaller sets of nodes before entering production.
• modules: The main directory for modules.
Is there a way to add different textures to an object with TextureLoader.load
I would like to add different textures to each face of a box, but I am not sure if loader.load is the way to do it. Right now I have:
loader.load('img/brick.jpg', function ( texture ){
var boxGeometry = new THREE.BoxGeometry( 3, 3, 3 );
var boxMaterial = new THREE.MeshLambertMaterial({
map: texture,
overdraw: 10
});
var box = new THREE.Mesh( boxGeometry, boxMaterial );
box.castShadow = true;
scene.add(box);
});
Is it possible to add more images in the loader.load or do I have to use a different method?
Answer
You can just load an image with loader.load, and store it in a variable:
var loader = new THREE.TextureLoader();
var brick = loader.load('img/brick.jpg');
var occlusion = loader.load('img/ao.jpg'); //Example texture
//More textures here
You can then apply it like so:
var boxGeometry = new THREE.BoxGeometry( 3, 3, 3 );
var boxMaterial = new THREE.MeshLambertMaterial({
map: brick,
aoMap: occlusion, //An example use
overdraw: 10
});
var box = new THREE.Mesh( boxGeometry, boxMaterial );
box.castShadow = true;
scene.add(box);
Instead of loading the texture and using an anonymous callback, just load the texture, store it in a variable, then apply where needed.
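Since the question asks for a different texture on each face: a Mesh can also take an array of six materials for a BoxGeometry, one per face (order: +x, -x, +y, -y, +z, -z). A sketch, assuming six image files exist (the file names are placeholders, and `scene` is your existing scene):

```javascript
var loader = new THREE.TextureLoader();

// One material per box face; file names are placeholders.
var materials = [
    'right.jpg', 'left.jpg', 'top.jpg',
    'bottom.jpg', 'front.jpg', 'back.jpg'
].map(function (file) {
    return new THREE.MeshLambertMaterial({ map: loader.load('img/' + file) });
});

var boxGeometry = new THREE.BoxGeometry(3, 3, 3);
var box = new THREE.Mesh(boxGeometry, materials); // array = one material per face
box.castShadow = true;
scene.add(box);
```

Note: in older three.js releases (the era when `overdraw` was used), the array had to be wrapped as `new THREE.MeshFaceMaterial(materials)` instead of being passed directly.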
Cat5e vs Cat6: Which Network Cable is Right for You?
When it comes to setting up a computer network, choosing the right type of network cable is crucial. Two of the most commonly used network cables today are Cat5e and Cat6. In this blog post, we’ll explore the key differences between Cat5e and Cat6 cables and examine the advantages and disadvantages of using each.
Differences Between Cat5e and Cat6 Network Cables:
When it comes to choosing the right type of network cable for your needs, understanding the differences between Cat5e and Cat6 is crucial. Both Cat5e and Cat6 are types of twisted pair cables used for transmitting data between devices on a computer network. However, they differ in several important ways.
Data Transfer Rate:
The main difference between Cat5e and Cat6 is their data transfer rate. Cat5e can transmit data at speeds up to 1000Mbps (megabits per second), while Cat6 can transmit data at speeds up to 10Gbps (gigabits per second). This means that Cat6 cables can handle significantly more data and are ideal for high-bandwidth applications, such as streaming video or online gaming.
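To put those rates in perspective, a rough back-of-the-envelope calculation of best-case transfer time (ignoring protocol overhead and disk speed):

```python
def transfer_seconds(size_gigabytes, link_gbps):
    """Ideal transfer time for a file of size_gigabytes over a link_gbps link."""
    size_gigabits = size_gigabytes * 8  # 1 byte = 8 bits
    return size_gigabits / link_gbps

# A 10 GB file on a fully saturated link:
print(transfer_seconds(10, 1))   # Cat5e at 1 Gbps  -> 80.0 seconds
print(transfer_seconds(10, 10))  # Cat6 at 10 Gbps  -> 8.0 seconds
```

Real-world throughput will be lower, but the tenfold gap between the two cable categories holds.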
Frequency:
Another key difference between Cat5e and Cat6 is the frequency at which they operate. Cat5e cables operate at frequencies up to 100MHz, while Cat6 cables operate at frequencies up to 250MHz. This higher frequency means that Cat6 cables are less susceptible to crosstalk and other forms of interference, making them more reliable and consistent for demanding applications.
Construction:
Cat6 cables are also constructed differently from Cat5e cables. Cat6 cables have thicker copper conductors, a tighter twist ratio, and stricter manufacturing standards. These differences make Cat6 cables more reliable and better suited to high-bandwidth applications, as they can handle more data with less interference.
Advantages and Disadvantages of Using Cat5e and Cat6 Cables:
The choice between Cat5e and Cat6 depends on the specific needs of your network. Here are some advantages and disadvantages of using each type of cable:
Advantages of Cat5e:
Cheaper than Cat6
Suitable for most home and small business networks
Can transmit data at speeds up to 1000Mbps
Disadvantages of Cat5e:
Limited bandwidth compared to Cat6
Less reliable and more susceptible to crosstalk and other forms of interference
Not suitable for high-bandwidth applications, such as streaming video or online gaming
Advantages of Cat6:
Higher bandwidth and faster data transfer rates than Cat5e
More reliable and less susceptible to interference than Cat5e
Suitable for high-bandwidth applications, such as streaming video or online gaming
Disadvantages of Cat6:
More expensive than Cat5e
May not be necessary for small networks or home use
Requires compatible hardware to take advantage of its full capabilities
Contact Owire to upgrade your network cables
If you need help choosing the right type of network cable for your business or home network, contact us today. Our team of experts can provide guidance on which cable is best suited for your specific needs and can assist with installation and setup. Upgrade your network cable today!
Demystifying Cisco Cybersecurity Operations Fundamentals
Introduction
Welcome to the world of cybersecurity, where virtual battles are fought and won every day! In our digitally-driven society, protecting sensitive information and securing networks is of paramount importance. And that’s where Cisco Cybersecurity Operations (CBROPS) comes into play. Whether you’re a beginner just dipping your toes into this fascinating field or an experienced professional looking to enhance your skills, this beginner’s guide will demystify the fundamentals of Cisco Cybersecurity Operations and set you on the path to becoming a skilled defender against cyber threats.
So grab your virtual armor, sharpen your digital weapons, and let’s dive deep into the realm of Cisco Cybersecurity Operations Fundamentals (CBROPS)! But before we do that, let’s first understand what exactly it entails.
Understanding Cisco Cybersecurity Operations Fundamentals (CBROPS)
Cybersecurity is a rapidly growing field that plays a crucial role in protecting organizations from malicious threats and cyber-attacks. In today’s digital landscape, it is essential for companies to have robust cybersecurity measures in place to safeguard their sensitive data and ensure the continuity of their operations.
Cisco, a leading technology company, offers comprehensive solutions in the realm of cybersecurity operations (CBROPS). Cisco Cybersecurity Operations Fundamentals (200-201) focuses on identifying, monitoring, and responding to potential security incidents within an organization’s network infrastructure. By leveraging advanced technologies and threat intelligence capabilities, Cisco helps organizations stay one step ahead of cybercriminals.
At the heart of every successful cybersecurity operation (CBROPS) is the role of a Cybersecurity Operations Analyst. These professionals are responsible for analyzing network traffic patterns, investigating security incidents, implementing preventive measures, and developing incident response plans. They play a critical role in maintaining the integrity and confidentiality of an organization’s data.
In order to effectively combat emerging threats, it is important to understand common vulnerabilities that can be exploited by attackers. From phishing attacks targeting unsuspecting employees to vulnerabilities in software systems or misconfigured devices—organizations must be proactive in mitigating these risks.
Cisco offers a range of security technologies designed to protect against various types of threats. From firewalls and intrusion prevention systems (IPS) to secure email gateways and endpoint protection platforms—they provide holistic solutions tailored to meet specific organizational requirements.
Implementing an effective security operations process is crucial for organizations seeking optimal protection against cyber threats. This involves continuous monitoring for suspicious activities or signs of compromise through tools such as Security Information Event Management (SIEM). Additionally, having well-defined incident response procedures ensures timely detection and mitigation when breaches occur.
Maintaining a secure network requires adherence to best practices such as regular patch management updates across all devices within an organization’s infrastructure. Enforcing strong password policies along with multi-factor authentication adds an extra layer of defense against unauthorized access attempts.
The field of Cisco Cybersecurity Operations presents numerous career opportunities for individuals interested in the fast-paced world.
The Role of a Cybersecurity Operations Analyst
In the dynamic and ever-evolving world of cybersecurity, organizations need skilled professionals who can actively protect their networks from threats. One such role is that of a Cybersecurity Operations Analyst. These individuals play a crucial role in identifying, analyzing, and responding to security incidents.
A Cybersecurity Operations Analyst is responsible for monitoring network traffic and systems logs to detect any signs of unauthorized access or malicious activity. They use sophisticated tools and technologies to analyze data and identify potential threats that may compromise the organization’s security posture.
Once a threat is identified, the analyst investigates its origin, scope, and impact on the network infrastructure. This involves conducting forensic analysis, examining log files, and collaborating with other teams to gather relevant information.
Based on their findings, analysts develop strategies to mitigate vulnerabilities within the system. They work closely with IT administrators to implement necessary patches or updates while ensuring minimal disruption to normal business operations.
Additionally, these professionals assess existing security measures in place by conducting vulnerability assessments and penetration tests. This helps them identify weaknesses within the system that could be exploited by attackers.
Cybersecurity Operations Analysts also play an essential role in incident response. In case of a breach or attack, they spring into action by containing the incident, eradicating any traces left behind by hackers, restoring affected systems back to normalcy as quickly as possible, and implementing preventive measures for future incidents.
To excel in this role, Cybersecurity Operations Analysts must have strong analytical skills along with knowledge of various networking protocols, databases, and operating systems.
They should stay updated on emerging cyber threats, trends, and industry best practices.
A proactive mindset coupled with excellent communication skills is also important for success in this field.
By assuming this critical position, you contribute significantly towards maintaining your organization's cybersecurity posture, safeguarding sensitive data, and protecting valuable assets from evolving threats.
Common Threats and Vulnerabilities in Cybersecurity
In today’s digital landscape, the need for robust cybersecurity measures has become paramount. As technology advances, so do the tactics used by cybercriminals to exploit vulnerabilities and gain unauthorized access to sensitive information. Understanding common threats and vulnerabilities is essential for any organization looking to protect its network.
One prevalent threat is malware, malicious software designed to damage or gain unauthorized access to a computer system. With various forms such as viruses, worms, and ransomware, malware can infiltrate networks through phishing emails or infected websites.
Another significant vulnerability lies in unpatched systems. Software vendors regularly release updates that address security flaws discovered in their products. Failure to apply these patches leaves organizations susceptible to known exploits that hackers can take advantage of.
Social engineering attacks represent yet another major threat. By manipulating human psychology, cybercriminals trick individuals into revealing confidential information or gaining unauthorized access. Phishing scams are a prime example of social engineering techniques used by attackers.
Weak passwords also pose a considerable risk. Many users still rely on easily guessable passwords or reuse them across multiple accounts, making it easier for hackers to compromise their credentials and gain unauthorized access.
Additionally, insider threats cannot be overlooked – employees who abuse their privileges or have malicious intent can cause significant harm within an organization’s network infrastructure.
To mitigate these threats and vulnerabilities effectively, organizations must implement multi-layered security solutions such as firewalls, intrusion detection systems (IDS), encryption protocols for data transmission (HTTPS), regular vulnerability assessments/penetration testing exercises (VA/PT), and employee training programs on cybersecurity best practices.
By staying informed about emerging threats and investing in comprehensive security measures like those offered by Cisco Security Technologies discussed earlier in this guide – organizations can significantly reduce their risk exposure while maintaining a secure network environment.
Cisco Security Technologies (CBROPS)
Cisco Security Technologies (CBROPS) play a crucial role in safeguarding networks and data from cyber threats. With the increasing sophistication of attacks, it is essential for organizations to have robust security measures in place. Cisco offers a wide range of technologies that address different aspects of cybersecurity.
One such technology is Cisco Firepower, which combines next-generation firewall capabilities with advanced threat intelligence. This solution provides real-time visibility into network traffic and helps identify and mitigate potential threats before they can cause harm.
Another important technology offered by Cisco is Cisco Umbrella, a cloud-based secure internet gateway that protects users against malicious websites and prevents malware infections. It also provides granular control over web usage policies, helping organizations enforce compliance and maintain productivity.
Cisco Identity Services Engine (ISE) is yet another powerful security technology that enables network administrators to define and enforce access policies based on user identity. By ensuring only authorized users gain access to sensitive resources, ISE helps prevent unauthorized data breaches.
In addition to these technologies, Cisco also offers solutions like Advanced Malware Protection (AMP), which uses machine learning algorithms to detect and block known and unknown malware threats in real-time.
By leveraging these cutting-edge technologies from Cisco, organizations can strengthen their cybersecurity posture and protect their valuable assets from evolving threats. Implementing multiple layers of defense using these technologies will provide comprehensive protection against various attack vectors.
Stay tuned for the next section where we explore how implementing the Security Operations Process can further enhance your organization’s cybersecurity readiness!
Implementing the Security Operations Process
When it comes to cybersecurity, having a well-defined security operations process is crucial for detecting and responding to threats effectively. This process involves a series of steps that enable organizations to proactively identify vulnerabilities, analyze potential risks, and implement appropriate measures to mitigate them.
The first step in implementing the security operations process is establishing a solid foundation by defining clear objectives and goals. This includes understanding the organization’s risk appetite, identifying critical assets, and determining the desired level of protection.
Once the objectives are established, the next step is threat intelligence gathering. This involves collecting information about emerging threats and vulnerabilities from various sources such as industry reports, threat feeds, and internal data analysis. By staying informed about current trends in cyber attacks, organizations can better anticipate potential risks.
After gathering threat intelligence, it’s time to perform risk assessment. This involves evaluating existing controls and identifying any weaknesses or gaps that could be exploited by attackers. The goal here is to prioritize risks based on their impact on business operations so that resources can be allocated accordingly.
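The prioritization step above is often formalized as a simple risk matrix, scoring each finding by likelihood and impact. A minimal sketch (the 1-5 scales and the example findings are illustrative, not from any Cisco material):

```python
def risk_score(likelihood, impact):
    """Score a finding on 1-5 likelihood/impact scales; higher = more urgent."""
    return likelihood * impact

def prioritize(findings):
    """Sort (name, likelihood, impact) tuples by descending risk score."""
    return sorted(findings, key=lambda f: risk_score(f[1], f[2]), reverse=True)

findings = [
    ("Unpatched web server", 4, 5),
    ("Weak guest Wi-Fi password", 3, 2),
    ("Stale admin account", 2, 4),
]
for name, likelihood, impact in prioritize(findings):
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

Ranking findings this way makes the resource-allocation decision explicit: the unpatched server (score 20) gets attention before the stale account (8) or the weak Wi-Fi password (6).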
With the risks identified, organizations then move on to developing an incident response plan. This plan outlines specific actions to take when a security incident occurs – from initial detection all the way through containment, eradication,and recovery processes.
To ensure effective implementation of these plans, you will need robust tools like Cisco Security Technologies.
Cisco offers a range of solutions designed specifically for cybersecurity operations (CBROPS), such as firewalls, intrusion prevention systems (IPS), and advanced malware protection (AMP). These technologies work together to provide comprehensive network defense capabilities.
Finally, it’s essential to continuously monitor and review the effectiveness of the security operations process.
Regular assessments should be conducted to identify areas for improvement and optimize existing measures.
Never underestimate how quickly threats evolve; keeping up with advancements in security technologies is vital for maintaining a secure network environment.
Best Practices for Maintaining a Secure Network
When it comes to cybersecurity, maintaining a secure network is of utmost importance. With the ever-increasing number of cyber threats and vulnerabilities, organizations must be proactive and implement best practices to protect their networks. Here are some essential tips for maintaining a secure network.
1. Regularly Update Software and Firmware: Keeping your software and firmware up to date is crucial as it ensures that you have the latest security patches installed. Cybercriminals often exploit vulnerabilities in outdated systems, so make sure you regularly check for updates from Cisco or other vendors.
2. Implement Strong Password Policies: Weak passwords are like an open invitation to hackers. Ensure that all employees follow strong password guidelines, such as using a combination of upper and lowercase letters, numbers, and special characters. Encourage them to use unique passwords for each account and enable multi-factor authentication whenever possible.
3. Use Firewalls: Installing firewalls is an effective way to monitor incoming and outgoing network traffic while blocking unauthorized access attempts. Configure your firewall settings according to industry best practices or seek assistance from experts who can help set up custom rules based on your organization’s specific requirements.
4. Encrypt Data: Encryption adds an extra layer of protection by converting sensitive information into unreadable code until decrypted with the correct key or passphrase. Utilize encryption technologies such as SSL/TLS protocols when transmitting data over public networks or storing data on portable devices.
5. Conduct Regular Security Audits: Perform periodic security audits to identify any potential vulnerabilities in your network infrastructure or configurations that could be exploited by attackers. This will allow you to address these weaknesses proactively instead of waiting until after an incident has occurred.
6. Put in Place Intrusion Detection Systems (IDS) & Intrusion Prevention Systems (IPS): IDS monitors network traffic patterns looking out for suspicious activities or behaviors while IPS actively blocks any malicious activity detected in real-time before it reaches its intended target.
Protecting your network from cyber threats requires a multi-layered approach. By following these best practices, you can significantly strengthen your organization's defenses.
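As an illustration of the strong-password-policy point above, here is a minimal checker enforcing the guidelines mentioned (the minimum length and the rule set are assumptions; real policies vary and length alone is not a complete defense):

```python
import re

def meets_policy(password, min_length=12):
    """Check a password against a simple policy:
    minimum length, plus at least one uppercase letter,
    lowercase letter, digit, and special character."""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    return all(bool(c) for c in checks)

print(meets_policy("Correct-Horse-Battery-9!"))  # True
print(meets_policy("password123"))               # False (too short, no upper/special)
```

A check like this is typically enforced at account-creation time, alongside multi-factor authentication rather than instead of it.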
Exploring Career Opportunities in Cisco Cybersecurity Operations (200-201)
Are you passionate about protecting organizations from cyber threats? Do you have a knack for analyzing security incidents and implementing effective solutions? If so, a career in Cisco Cybersecurity Operations might be the perfect fit for you!
As technology continues to advance, the need for skilled cybersecurity professionals is on the rise. Companies of all sizes are seeking individuals who can defend their networks against ever-evolving threats. And with Cisco being a global leader in cybersecurity solutions, having expertise in Cisco Cybersecurity Operations (200-201) can open up exciting career prospects.
In this field, you could work as a Security Analyst, responsible for monitoring network activity and investigating any potential breaches or vulnerabilities. You may also specialize as an Incident Responder, actively responding to security incidents and mitigating their impact.
Another avenue is becoming a Security Engineer, where you would design and implement secure network infrastructures using Cisco’s cutting-edge technologies. Or perhaps you’re more inclined towards Security Consulting, offering guidance to clients on strengthening their cybersecurity posture.
With certifications like CCNA Cyber Ops (200-201 CBROPS), your skills will be recognized globally by employers looking to fill these crucial roles. These certifications validate your knowledge of fundamental security concepts and equip you with hands-on practical experience through labs and simulations.
To stand out in this competitive field, it’s important to keep yourself updated with the latest industry trends and advancements. Continuous learning through workshops, conferences, online courses or even pursuing advanced certifications such as CCNP Security can help solidify your expertise.
So if you’re ready to embark on an exciting journey into the world of Cisco Cybersecurity Operations (200-201) – filled with challenges but also immense opportunities – start building your skills today! Take advantage of resources like Dumps Arena that provide practice exams based on real exam questions – helping boost your confidence before tackling official certification exams.
Remember that every step forward counts towards shaping an impactful career in cybersecurity operations. Stay curious, be proactive, and embrace the constant evolution of this field.
Conclusion
In this beginner’s guide, we have demystified the fundamentals of Cisco Cybersecurity Operations (200-201). We explored the role of a Cybersecurity Operations Analyst and discussed common threats and vulnerabilities in cybersecurity. We also delved into various Cisco security technologies that can be implemented to protect networks from potential attacks.
By following best practices for maintaining a secure network, organizations can significantly reduce their risk of cyber threats. From regularly updating software and systems to implementing multi-factor authentication, every step taken towards strengthening cybersecurity is crucial.
For those interested in pursuing a career in Cisco Cybersecurity Operations (200-201), there are numerous opportunities waiting to be explored. With the increasing demand for skilled professionals in this field, obtaining certifications such as 200-201 CBROPS can greatly enhance one’s chances of landing rewarding job roles.
Remember, cybersecurity is an ever-evolving landscape, which means there will always be new challenges and advancements to keep up with. Staying updated on the latest trends and continuously expanding your knowledge will help you thrive in this dynamic field.
As technology continues to advance at lightning speed, it becomes increasingly important for individuals and organizations alike to prioritize cybersecurity operations. By understanding the fundamentals outlined in this guide and leveraging Cisco’s robust security technologies, you can build a strong defense against cyber threats.
So go ahead, embrace the world of Cisco Cybersecurity Operations Fundamentals (200-201)! Equip yourself with knowledge and skills needed to safeguard digital assets effectively – because when it comes to protecting sensitive information and ensuring business continuity, every effort counts!
To get started on your journey towards becoming a proficient cybersecurity professional or further enhancing your existing skillset, visit Dumps Arena today.
By Liam Kai
Liam Kai is an essayist and blogger with CertCertification, an online platform specializing in IT exam guidance. With a longstanding passion for technology and continuous skill development, crafting IT exam guides for renowned companies such as Amazon, Cisco, CompTIA, HP, Microsoft, Oracle, SAP, Salesforce, and VMware has become second nature to him.
In the intricate realm of information technology, where the foundations of modern business are securely anchored, the Local Area Network (LAN) is a cornerstone of connectivity. It's the backbone that supports your organization's daily operations, ensuring data flows seamlessly and without interruption.
As seasoned experts in your field, you are well aware of the indispensable role that LAN networks play. But, optimizing and monitoring these networks can sometimes feel like navigating through uncharted waters. In this article, we're going to delve into the intricacies of LAN monitoring, equipping you with the knowledge and tools needed to elevate your LAN network's performance.
So, let's embark on this journey together, as we uncover the techniques and strategies required to master LAN monitoring and supercharge your LAN network's capabilities.
Understanding the Crucial Role of LAN Networks
In today's technology landscape, Local Area Networks (LANs) play a vital role as the digital infrastructure that connects organizations, regardless of their size. Before we get into monitoring LAN networks, let’s delve into the importance of LANs and why it's crucial to understand their role.
1. Backbone of Connectivity: LANs are the beating heart of an organization's digital infrastructure. They facilitate seamless communication and data exchange among connected devices within a confined geographic area, such as an office building or campus. From computers and printers to servers and IP phones, LANs ensure these devices are interconnected and able to function harmoniously.
2. Efficient Data Sharing: Think of LANs as the express lanes for data transfer within your organization. Files, documents, emails, and more are transmitted at lightning speed, ensuring that productivity isn't hampered by sluggish data transfer rates.
3. Resource Sharing: LANs enable resource sharing, meaning multiple users can access shared resources like printers, scanners, and network-attached storage (NAS) devices. This not only saves costs but also streamlines workflows.
4. Security and Control: LANs provide a controlled environment for data and device access. With proper configuration and security protocols, IT administrators can safeguard sensitive information and grant or restrict access as needed.
5. Business Continuity: In the event of a wider network outage or internet disruption, LANs can continue to function, allowing essential internal operations to proceed uninterrupted. This resilience is vital for businesses that rely heavily on real-time data access.
LAN vs. WAN Networks: What’s The Difference?
In the world of networking, understanding the distinctions between Local Area Networks (LANs) and Wide Area Networks (WANs) is fundamental. These two network types serve distinct purposes and have varying scopes.
Since we'll be focusing on LAN monitoring in this article, let's briefly explore the key differences between LANs and WANs to understand how monitoring each of them differs.
Local Area Network (LAN):
1. Scope: LANs are designed to connect devices within a limited geographic area, typically within a single building, office, or campus. They are used to establish local connectivity for a specific group of users or devices.
2. Ownership: LANs are usually privately owned and operated by an organization, such as a business, school, or home network. They offer control and customization over network configurations.
3. Data Transfer: LANs provide high-speed data transfer rates, often in the range of 10 Mbps to 10 Gbps, allowing for fast communication between devices within the network.
4. Topologies: LANs can be set up using various network topologies, including Ethernet (wired) and Wi-Fi (wireless). Common LAN devices include computers, printers, servers, and switches.
5. Use Cases: LANs are ideal for local resource sharing, such as accessing shared files, printers, and internet connections within a building. They are commonly used in offices, homes, schools, and small to medium-sized businesses.
Wide Area Network (WAN):
1. Scope: WANs cover a larger geographic area and are designed to connect LANs or networks that are located far apart. WANs can span cities, states, countries, or even continents.
2. Ownership: WANs often involve multiple organizations and service providers, and they may be a combination of public and private networks. They require more extensive infrastructure and coordination.
3. Data Transfer: WANs typically have lower data transfer rates compared to LANs due to the longer distances involved. Data transfer rates can range from a few Kbps to Gbps, depending on the technology used.
4. Topologies: WANs use various technologies for long-distance connectivity, including leased lines, optical fiber, satellite links, and the internet itself. Routers and switches are key components in WAN infrastructure.
5. Use Cases: WANs are used for connecting remote offices, branch locations, and data centers, enabling communication and data exchange across vast distances. The internet itself is a global WAN used for worldwide connectivity.
In summary, LANs are local networks used for connecting devices within a limited area, offering high-speed communication and control over the network. WANs, on the other hand, connect LANs or networks over long distances, often involving lower data transfer rates and a more extensive infrastructure. Both LANs and WANs serve critical roles in modern computing and communication, addressing the needs of local and global connectivity, respectively.
How to Monitor SD-WAN Networks: Mastering SD-WAN Network Monitoring
Learn how to monitor SD-WAN networks with Network Monitoring to get complete visibility over your SD-WAN service and identify SD-WAN issues.
The Need for Effective LAN Monitoring
Now that we've established the indispensable role of LANs, let's dive into why monitoring these networks is not just a choice but a necessity for all businesses reliant on high-functioning LAN network performance:
• Proactive Issue Detection: LANs are dynamic, with devices connecting and disconnecting, and traffic patterns fluctuating. Effective LAN monitoring is your early warning system, helping you identify performance issues, bottlenecks, or security breaches before they escalate into critical problems.
• Optimization and Resource Allocation: Monitoring provides insights into network usage trends and resource allocation. With this data, you can optimize your LAN for peak performance, ensuring that mission-critical applications receive the necessary bandwidth.
• Security Threat Mitigation: Cyber threats are ever-evolving, and LANs are potential entry points for malicious actors. Monitoring helps detect suspicious activities, allowing you to take immediate action to protect your network and sensitive data.
• Cost Efficiency: By tracking resource utilization and identifying inefficiencies, LAN monitoring can help reduce unnecessary expenditures on network infrastructure and improve the return on investment.
• Compliance and Reporting: Many industries have regulatory requirements for data security and privacy. LAN monitoring aids in maintaining compliance by providing audit trails and reporting capabilities.
• Enhanced User Experience: LAN monitoring ensures that end-users experience minimal disruption and enjoy optimal network performance. This translates to higher productivity and customer satisfaction.
Now let’s explore the tools and techniques you can employ to become a LAN monitoring maestro, ensuring your LAN network operates at peak performance and security.
Introducing Obkio: Your Partner in Network Performance & LAN Monitoring!
In the ever-evolving landscape of network management, having the right tools at your disposal is essential. Enter Obkio, a leading name in Network Performance Monitoring.
Obkio Network Performance Monitoring is a cutting-edge network monitoring solution designed to empower IT professionals, network administrators, and businesses of all sizes with the ability to monitor and optimize their network performance effectively. With a focus on simplicity, reliability, and actionable insights, Obkio stands as a trusted ally in ensuring that your network operates at its peak potential.
Why Choose Obkio?
1. Real-Time Visibility: Obkio provides real-time insights into your network's performance, allowing you to detect issues as they happen and take immediate action.
2. Proactive Monitoring: Say goodbye to reacting to network problems. Obkio offers proactive alerts that keep you informed of network anomalies, so you can address them before they impact your operations.
3. User-Friendly Interface: Whether you're an IT veteran or new to network management, Obkio's intuitive interface makes monitoring and troubleshooting accessible to all skill levels.
4. Detailed Analytics: Obkio's reporting and analytics tools offer in-depth insights into your network's behaviour, helping you make informed decisions to optimize performance and resource allocation.
5. Scalability: Obkio scales with your network. Whether you're managing a small LAN or a complex global WAN, Obkio can adapt to your needs.
Don't leave your network's performance to chance. Obkio is your partner in ensuring that your LAN and WAN networks operate seamlessly, efficiently, and securely.
Getting Started with LAN Monitoring
In the vast realm of network management, effective LAN monitoring is the cornerstone of ensuring your Local Area Network (LAN) runs like a well-oiled machine. This chapter will serve as your initiation into the world of LAN monitoring, covering essential aspects such as:
I. Defining LAN Monitoring
Before diving into the intricacies, it's crucial to establish a clear definition of LAN monitoring.
LAN monitoring is the process of observing and assessing the performance, health, and security of a Local Area Network. It involves the continuous tracking of various network metrics, such as bandwidth utilization, traffic patterns, device connectivity, and security parameters.
LAN monitoring tools and software are employed to collect and analyze data from network devices, allowing administrators to gain insights into network performance and detect anomalies or issues in real-time.
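The "continuous tracking" idea above can be sketched in a few lines of Python. This is an illustrative stub, not any vendor's implementation: `probe_latency_ms` simulates whatever measurement a real monitoring agent would perform against a peer.

```python
import random
import statistics
import time

def probe_latency_ms() -> float:
    """Hypothetical probe: a real agent sends synthetic traffic to a
    peer and times the round trip. Simulated here with random values."""
    return random.uniform(0.5, 5.0)

def collect_samples(count: int, interval_s: float = 0.0) -> list:
    """Poll the probe `count` times, pausing interval_s between polls."""
    samples = []
    for _ in range(count):
        samples.append(probe_latency_ms())
        time.sleep(interval_s)
    return samples

samples = collect_samples(10)
print(f"mean latency over 10 probes: {statistics.mean(samples):.2f} ms")
```

A real agent would run this loop continuously (for example, once per minute) and ship the samples to a central dashboard for analysis.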
II. Setting Clear Network Performance Objectives for LAN Monitoring
To embark on a successful LAN monitoring journey, it's essential to establish clear network performance objectives. These objectives serve as your guiding principles throughout the monitoring process:
• Define Measurable Goals: Determine what specific aspects of your LAN performance you want to monitor and improve. This could include metrics like latency, bandwidth utilization, or packet loss.
• Align with Business Needs: Ensure that your performance objectives align with your organization's broader goals and objectives. For example, if your business relies heavily on real-time video conferencing, your performance objectives may prioritize low latency and high bandwidth for this application.
• Establish Baselines: Before making improvements, establish baselines by measuring current network performance. This provides a reference point for evaluating the effectiveness of your monitoring and optimization efforts.
• Monitor Continuously: Network performance objectives are not static; they should evolve with your organization's changing needs and technological advancements. Continuously monitor and adjust your objectives to stay aligned with your business goals.
As you embark on your LAN monitoring journey, keep these fundamental principles in mind. They will serve as the foundation upon which you'll build a robust and effective LAN monitoring strategy to elevate your network's performance and reliability.
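As a concrete example of establishing a baseline, the following sketch summarizes a set of latency samples into the statistics (mean, median, 95th percentile) you might set measurable objectives against. The sample data is invented for illustration.

```python
import statistics

def baseline(samples: list) -> dict:
    """Summarize latency samples into baseline statistics.
    The 95th percentile is the 19th of 19 cut points when the data
    is split into 20 quantiles (inclusive method)."""
    qs = statistics.quantiles(samples, n=20, method="inclusive")
    return {
        "mean": statistics.mean(samples),
        "p50": statistics.median(samples),
        "p95": qs[18],
    }

# A week of (invented) hourly latency samples, in milliseconds:
week_of_latency_ms = [2.1, 2.4, 2.2, 2.0, 2.3, 9.8, 2.2, 2.5, 2.1, 2.4]
print(baseline(week_of_latency_ms))
```

The p95 value is often a better objective than the mean, because it captures the experience of your worst-served traffic rather than the average.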
How to Deploy LAN Monitoring with NPM Tool: Step-by-Step Guide
Now, let’s get into the real juicy stuff: deploying LAN monitoring!
When it comes to LAN monitoring, having a comprehensive and all-encompassing view of your network is paramount. It's not just about tracking isolated metrics or keeping an eye on specific devices; it's about gaining complete visibility into your network's performance from end to end.
This is where an end-to-end Network Performance Monitoring tool emerges as your most valuable asset.
An end-to-end Network Performance Monitoring tool allows you to monitor every aspect of your LAN network, from the moment data leaves its source to the instant it reaches its destination. This means tracking data packets as they traverse through switches, routers, firewalls, and various network devices. It's like having a virtual set of eyes on every byte of data within your LAN.
Step 1. Deploy A Network Performance & LAN Monitoring Tool
First, you need to choose the right tool for the job! We recommend an Active, Synthetic agent-based tool, like Obkio Network Performance Monitoring, to measure end-to-end network performance.
• Obkio is SaaS-based: It can be deployed in 10 minutes and doesn't require any special configurations. This means that you can start monitoring network performance quickly, and can easily scale the solution to your business needs.
• End-to-End: Obkio measures every end of your network, from your LAN to your WAN, using Network Monitoring Agents. You can deploy Agents in all your branch offices, remote locations, data centers, Clouds and over the Internet. It can also support all network types.
• Synthetic Monitoring: Obkio measures network performance using synthetic UDP traffic, which means there’s no packet capture required and no privacy concerns for users.
With complete visibility, you can quickly pinpoint the source of network problems, whether it's congestion at a specific switch, excessive latency between devices, or even security breaches within your LAN. This level of insight is indispensable for IT professionals and network administrators striving to maintain optimal LAN performance.
Step 2. Deploy LAN Monitoring Agents
To kickstart your network performance & LAN monitoring journey with Obkio, the first step is to deploy Network Monitoring Agents. These Agents play a vital role as they continuously gauge network performance at crucial network junctures.
They’re deployed in key network locations in your LAN and WAN and continuously exchange synthetic traffic to measure performance.
First, start by deploying Agents in your LAN (Local Area Network):
1. Determine Monitoring Locations:
Identify key points within your LAN where you want to monitor network performance. These locations should be strategically chosen based on factors such as network traffic, critical devices, or areas with known connectivity issues.
2. Select Agent Types:
Depending on your monitoring needs and the specific devices or platforms available in your LAN, choose the appropriate type of monitoring Agent. Some common Agent types include software-based Agents that can run on various operating systems (e.g., Windows, macOS, Linux) or hardware-based Agents specifically designed for network monitoring.
These Agents should be strategically placed in the specific office locations where you're experiencing connectivity challenges. Obkio offers a variety of Agent types, all equipped with the same powerful features. What's more, they are compatible with a range of operating systems, including MacOS, Windows, Linux, and more. This flexibility ensures you can seamlessly integrate them into your network environment.
Step 3. Deploy WAN Monitoring Agents
Next, you’ll need to deploy Monitoring Agents in your WAN as well.
WAN monitoring allows you to track network performance from your organization's premises (LAN) to remote locations, data centers, cloud services, or branch offices. It allows you to pinpoint the source of network connectivity problems, whether they originate within your LAN or occur due to external factors.
More specifically, WAN monitoring helps you detect and diagnose performance problems that may occur beyond your LAN, such as issues with internet service providers (ISPs), cloud service providers, or connections between branch offices. This facilitates faster troubleshooting and problem resolution.
For this you can use:
• Software or Hardware Agents: Deploy these Agents in remote offices, branch offices or data centers.
• Public Monitoring Agents: These are deployed over the Internet and managed by Obkio. They compare performance up to the Internet and quickly identify if network issues are global or specific to one destination. You can use an AWS or Google Cloud Agent.
Step 4. Monitor Network Devices for End-to-End LAN Monitoring
Finally, you’ll need to begin monitoring your network devices using Obkio’s Network Device Monitoring feature.
Network devices are the backbone of any LAN infrastructure, and monitoring them provides critical insights into the health and performance of your LAN. Monitoring network devices allows you to keep tabs on the status and health of critical components such as switches, routers, firewalls, access points, and servers. This helps you identify hardware failures, overheating issues, or other problems that could disrupt network operations.
1. How to Monitor Network Devices:
Once you’ve installed your monitoring agents, you can then start monitoring your network devices using SNMP by adding the devices inside the Obkio App.
The device must support SNMP. Obkio supports all versions of SNMP (v1, v2c and v3) and of course, read-only access is needed. Learn more about the supported devices here.
2. Which Network Devices Should You Monitor?
When it comes to LAN monitoring, it's crucial to monitor a range of network devices to ensure the overall health, performance, and security of your network. Here's a list of network devices that you should consider monitoring:
1. Switches: Switches are central components of LANs, and monitoring them helps track traffic patterns, identify congestion, and ensure that they are operating correctly.
2. Routers: Routers connect different network segments, including LANs and WANs. Monitoring routers helps detect routing issues, traffic congestion, and security threats.
3. Firewalls: Firewalls are vital for network security. Monitoring firewalls helps detect and respond to security threats, track rule violations, and ensure proper configuration.
4. Access Points (APs): APs are used for wireless LANs (Wi-Fi). Monitoring APs helps assess Wi-Fi signal strength, client connectivity, and potential interference.
5. Servers: Monitoring servers is essential to ensure they are available, responsive, and not experiencing resource bottlenecks. This includes both physical and virtual servers.
6. Load Balancers: Load balancers distribute network traffic across multiple servers or paths. Monitoring load balancers ensures even distribution and helps identify potential failures.
7. Network Attached Storage (NAS) Devices: NAS devices store and manage data. Monitoring them helps ensure data availability and assess storage utilization.
8. Printers and Print Servers: Monitoring printers and print servers ensures that printing services remain available and identifies issues like print job queues.
9. VoIP Gateways: If your LAN supports Voice over IP (VoIP), monitoring VoIP gateways is crucial to maintain call quality and detect call drops or latency issues.
10. Security Cameras and DVR/NVRs: If your LAN includes surveillance cameras, monitoring these devices ensures their functionality and storage capacity.
11. Intrusion Detection/Prevention Systems (IDS/IPS): IDS/IPS devices monitor network traffic for security threats. Monitoring them helps detect and respond to potential intrusions.
12. Uninterruptible Power Supplies (UPS): Monitoring UPS devices ensures that critical network equipment remains powered during outages and tracks battery health.
13. Environmental Sensors: Monitoring environmental conditions (temperature, humidity, etc.) within server rooms or data centers helps prevent equipment overheating or damage.
14. Network Printers and Scanners: These devices should be monitored to ensure they are online, available, and not experiencing issues.
15. Gateways and Modems: Monitoring these devices is essential if they are part of your network's connectivity to the internet or WAN.
16. Network-Attached Devices (IoT): If your LAN includes IoT devices, monitoring them helps ensure their proper functioning and security.
17. Managed Network Services: If you use third-party managed services, monitoring their performance ensures that you receive the service quality you expect.
18. Endpoints (Computers, Workstations): Monitoring individual computers or workstations can help identify performance issues, malware infections, or hardware problems.
19. UPS Bypass Switches: These devices should be monitored to ensure that they are functioning correctly and can be used in case of a UPS failure.
20. Network Print Servers: If your LAN includes networked printers, monitoring the print servers helps ensure print job processing and printer availability.
21. Storage Area Network (SAN) Devices: If your LAN includes SAN devices, monitoring them helps ensure data access, performance, and storage capacity.
The specific devices you should monitor depend on your network's size, complexity, and critical services. Comprehensive LAN monitoring involves monitoring a combination of these devices to maintain network reliability, security, and performance.
Step 5. Collect Continuous Network & LAN Monitoring Data
After deploying your Network Performance Monitoring tool, Obkio's Monitoring Agents initiate data exchange to assess network metrics such as jitter, packet loss, and latency. These metrics are then presented on Obkio's Network Response Time Graph for visualization and analysis.
Evaluating these network metrics enables you to promptly recognize and pinpoint any deterioration in network performance, which could serve as an early indicator of impending network problems. Obkio delivers real-time network performance updates every minute, ensuring that you stay informed and are alerted to any issues as they arise. Additionally, you have the capability to review historical performance data, facilitating the troubleshooting of past network issues.
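A simplified illustration of how per-minute samples can be checked against a baseline to flag degradation. The 50% threshold here is an arbitrary example, not a vendor default:

```python
def check_degradation(latency_ms: float, baseline_ms: float,
                      threshold_pct: float = 50.0) -> bool:
    """Flag a sample that exceeds the baseline by threshold_pct percent."""
    return latency_ms > baseline_ms * (1 + threshold_pct / 100)

# One simulated "minute" of samples against a 20 ms baseline:
minute_samples = [18.0, 21.0, 19.5, 45.0, 20.2]
alerts = [s for s in minute_samples if check_degradation(s, 20.0)]
print(f"{len(alerts)} degraded sample(s): {alerts}")
```

In practice an alerting system would also require several consecutive breaches before firing, to avoid paging on a single transient spike.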
End-to-End LAN Monitoring: Prioritizing What to Monitor in Your LAN
Effective LAN monitoring is about focusing your efforts on the most critical components and areas within your network.
In this section, we'll guide you through identifying and prioritizing the key elements that warrant close monitoring to ensure optimal network performance, security, and reliability. By understanding what matters most, you can streamline your monitoring efforts and make the most of your resources for a resilient and efficient LAN infrastructure. It’ll also help you with your Monitoring Agent setup from the previous section.
1. Core Network Devices:
• Switches: Core switches are central to network traffic routing. Monitoring them ensures efficient data transfer and identifies potential bottlenecks.
• Routers: Routers connect your LAN to other networks. Monitoring routers helps maintain routing efficiency and security.
2. Internet/WAN Connectivity:
• Monitor your LAN's connection to the internet or wide area network (WAN). It ensures that external connectivity is stable and any disruptions are promptly addressed.
3. Server Infrastructure:
• Monitor critical servers, including domain controllers, file servers, email servers, and database servers, to ensure their availability, performance, and resource utilization.
4. Firewalls and Security Appliances:
• Firewalls and security appliances are essential for protecting your LAN from external threats. Monitor them for intrusion attempts, rule violations, and overall security status.
5. Network Traffic and Bandwidth Usage:
• Continuously monitor network traffic to identify patterns, potential congestion, and bandwidth hogs. This helps in optimizing traffic flow.
6. Network Segments and VLANs:
• If you have VLANs (Virtual LANs) or segmented networks, monitor traffic and performance within these segments to ensure proper isolation and performance.
7. Wireless LAN (Wi-Fi):
• Monitor access points (APs), wireless client connectivity, signal strength, and channel interference to maintain a reliable Wi-Fi network.
8. Network Services and Applications:
• Keep an eye on critical network services and applications, such as email, VoIP, video conferencing, and web servers, to ensure uninterrupted service.
9. Security and Intrusion Detection Systems:
• Monitor security systems, including intrusion detection and prevention systems (IDS/IPS), to detect and respond to security threats proactively.
10. Network Health and Device Status:
• Regularly check the health and status of network devices, including switches, routers, and firewalls, to identify hardware failures or misconfigurations.
11. DNS and DHCP Services:
• Monitor DNS (Domain Name System) and DHCP (Dynamic Host Configuration Protocol) services to ensure that hostname resolution and IP address allocation are functioning correctly.
12. Redundancy and Failover Mechanisms:
• Monitor redundancy and failover configurations to ensure that backup paths or systems are ready to take over in case of device or link failures.
13. Remote Access and VPNs:
• Monitor remote access solutions and VPN (Virtual Private Network) connections for security, availability, and performance.
14. Endpoints and User Devices:
• Keep an eye on individual workstations, laptops, and user devices for performance issues, malware infections, and compliance with security policies.
15. Critical Network Paths:
• Monitor specific network paths that are critical for key applications or services to ensure low latency, minimal packet loss, and high availability.
16. VoIP Quality of Service (QoS):
• If your LAN supports VoIP, monitor QoS metrics to maintain high call quality and reduce call drops or latency.
17. Application Performance:
• Monitor the performance of critical applications to ensure that they are responsive and meeting service level agreements (SLAs).
18. Backups and Data Storage:
• Monitor backup systems and data storage solutions to verify that backups are successful and data is protected.
19. Logs and Event Data:
• Regularly review logs and event data from network devices and servers to detect anomalies, security incidents, and performance issues.
The most important parts to monitor will depend on your organization's specific goals, network architecture, and critical services. It's essential to establish clear monitoring objectives and tailor your monitoring strategy to align with your network's unique requirements and priorities.
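As one concrete example from the list above, DNS resolution time can be sampled with nothing more than the Python standard library. `localhost` is used here because it resolves from the local hosts file even without internet access:

```python
import socket
import time

def dns_resolution_ms(hostname: str) -> float:
    """Time a single name resolution via the operating system resolver."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000

print(f"localhost resolved in {dns_resolution_ms('localhost'):.2f} ms")
```

Repeating this probe for the hostnames your critical applications depend on gives an early warning when a DNS server starts responding slowly.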
LAN Monitoring & WAN Monitoring: Why You Should Monitor Both
When it comes to networking, your LAN isn’t everything. A true end-to-end monitoring setup is one that monitors network performance from your LAN to your WAN.
Monitoring both your WAN (Wide Area Network) and LAN (Local Area Network) is essential for a holistic approach to network management and ensuring the overall health and performance of your organization's network infrastructure. Here’s why you should consider it:
1. End-to-End Network Visibility:
WAN monitoring allows you to track network performance from your organization's premises (LAN) to remote locations, data centers, cloud services, or branch offices. This end-to-end visibility is essential for understanding the complete network path and identifying potential bottlenecks or issues that may arise anywhere along the route.
2. Identifying Network Degradation:
WAN monitoring helps you detect and diagnose performance problems that may occur beyond your LAN, such as issues with internet service providers (ISPs), cloud service providers, or connections between branch offices. These external factors can have a significant impact on the user experience.
3. Optimizing Cloud and Internet Services:
Many organizations rely on cloud-based services and applications, making WAN monitoring crucial for ensuring the availability and performance of these services. Monitoring your WAN helps you assess the impact of cloud service providers on your network and identify any network connectivity issues or latency issues.
4. Troubleshooting Network Connectivity Problems:
WAN monitoring enables you to pinpoint the source of network connectivity problems, whether they originate within your LAN or occur due to external factors. This facilitates faster troubleshooting and problem resolution.
5. Enhancing User Experience:
A poor WAN connection can lead to slow application response times, video conferencing disruptions, and delays in accessing remote resources. Monitoring your WAN ensures a smooth and consistent user experience, regardless of the location of your users or resources.
6. Capacity Planning:
Monitoring both WAN and LAN traffic helps you make informed decisions regarding bandwidth allocation and capacity planning. This ensures that you have the necessary resources to support your network's growth and evolving demands.
7. Security and Compliance:
WAN monitoring is essential for detecting and mitigating security threats that may target external network entry points, such as remote access or VPN connections. It also helps organizations maintain compliance with security standards and regulations.
8. Comprehensive Network Analysis:
Combining WAN and LAN monitoring data provides a comprehensive view of your network's performance and health. This integrated approach allows you to correlate data, identify trends, and gain deeper insights into network behaviour.
In summary, monitoring both your WAN and LAN is critical for maintaining a reliable, high-performing, and secure network infrastructure. It enables you to proactively address issues, optimize network performance, and deliver a seamless experience to users, regardless of their location or the services they access.
What is Distributed Network Monitoring for SaaS and SD-WAN
Learn about distributed network monitoring and how it’s become necessary to monitor decentralized networks like SD-WAN and cloud-based (SaaS) applications.
Key Metrics to Measure LAN Network Performance
In the realm of LAN monitoring, knowledge is power. LAN monitoring is all about measuring vital metrics that serve as the pulse of your LAN's health and performance. By keeping a watchful eye on these key metrics, you gain the insights needed to ensure your LAN operates smoothly, efficiently, and securely.
When you’re using a Network Performance Monitoring tool like Obkio, your NPM tool will automatically monitor all these metrics for you. But it’s still important to understand their role in your LAN network performance.
I. Bandwidth Utilization: The Heartbeat of Your LAN Network Performance
Imagine bandwidth as the lifeblood of your LAN. It's the fuel that powers data transfer, and its efficient allocation is critical. Monitoring bandwidth utilization provides a real-time view of how much data is flowing through your network at any given moment. This insight is invaluable for:
• Identifying congestion points: High bandwidth utilization at specific times may indicate congestion, allowing you to take preemptive measures.
• Resource allocation: Ensure that critical applications receive the necessary bandwidth for optimal performance.
• Capacity planning: Use historical data to plan for future bandwidth needs and infrastructure upgrades.
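To make the utilization idea concrete, here is a minimal sketch that turns two readings of an interface octet counter (the kind SNMP exposes as ifInOctets/ifOutOctets) into a utilization percentage, including the wrap-around that 32-bit counters are prone to:

```python
def utilization_pct(octets_t0: int, octets_t1: int,
                    interval_s: float, link_bps: int,
                    counter_bits: int = 32) -> float:
    """Compute link utilization from two readings of an interface
    octet counter, tolerating a single counter wrap-around."""
    max_count = 2 ** counter_bits
    delta = (octets_t1 - octets_t0) % max_count  # wrap-safe difference
    bits = delta * 8
    return 100.0 * bits / (link_bps * interval_s)

# 10 MB transferred in 60 s on a 100 Mbps link → about 1.33%:
print(f"{utilization_pct(0, 10_000_000, 60, 100_000_000):.2f}%")
```

On fast links, poll often enough that the counter cannot wrap more than once per interval, or use the 64-bit high-capacity counters (ifHCInOctets) where the device supports them.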
II. Analyzing Network Traffic With A LAN Network Monitor
Network traffic analysis goes beyond mere volume. It delves into the types of data traversing your LAN, its source and destination, and how it impacts your network's performance. By analyzing network traffic, you can:
• Detect anomalies: Unusual traffic patterns might be a sign of security threats or network issues.
• Optimize routing: Ensure data takes the most efficient path, reducing latency and improving network performance.
• Improve Quality of Service (QoS): Prioritize critical traffic, such as VoIP or video conferencing, to maintain high service quality.
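Anomaly detection of the kind described above can be as simple as flagging samples that sit far outside the historical distribution. This z-score check is a teaching sketch; production traffic-analysis tools use far more robust baselining:

```python
import statistics

def is_anomalous(value: float, history: list,
                 z_threshold: float = 3.0) -> bool:
    """Flag a traffic sample more than z_threshold standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold

# Invented history of per-minute traffic volumes, in Mbps:
history_mbps = [40, 42, 39, 41, 43, 40, 38, 41]
print(is_anomalous(90, history_mbps))  # a burst well above normal
```

A sudden `True` on outbound traffic volume, for instance, is exactly the kind of signal that warrants a closer look for data exfiltration or a misbehaving host.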
III. Latency and Packet Loss: The Silent Disruptors of LAN Network Performance
Latency and packet loss are the silent disruptors that can undermine user experience and application performance. Monitoring these metrics is essential for:
• Real-time applications: High latency can lead to delays in real-time applications like video conferencing and online gaming.
• Troubleshooting: Detect and address issues that cause packet loss, which can result in data retransmissions and network inefficiency.
• Ensuring low-latency communication: Maintain responsive communication for critical applications and remote work scenarios.
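A minimal sketch of how raw probe results translate into the loss and latency figures discussed above; `None` marks a probe that timed out and is counted as lost:

```python
def loss_and_latency(results: list):
    """Summarize probe results where each entry is an RTT in ms,
    or None for a probe that timed out (lost)."""
    lost = sum(1 for r in results if r is None)
    received = [r for r in results if r is not None]
    loss_pct = 100.0 * lost / len(results)
    avg_rtt = sum(received) / len(received) if received else None
    return loss_pct, avg_rtt

# Invented probe results: 10 probes, 2 lost
probes = [12.1, 11.8, None, 12.4, None, 11.9, 12.0, 12.2, 12.3, 12.1]
loss, rtt = loss_and_latency(probes)
print(f"loss: {loss:.0f}%  avg RTT: {rtt:.1f} ms")
```

Even a loss rate of a few percent is enough to noticeably degrade VoIP and video quality, which is why monitoring tools track it continuously rather than waiting for user complaints.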
IV. More Network Metrics for Measuring LAN Network Performance
In addition to the key metrics mentioned earlier, there are several other network metrics and parameters that are valuable to monitor for comprehensive LAN monitoring. These metrics provide insights into various aspects of your network's performance, security, and health. Here are some additional network metrics to consider monitoring for LAN monitoring:
1. Error Rates: Monitor error rates on network interfaces to identify issues such as collision errors, frame errors, or CRC errors, which can indicate physical or data link layer problems.
2. Throughput: Measure the actual data transfer rate to determine how much data is being successfully transmitted over the network. Network throughput is extremely important for optimizing performance.
3. Network Utilization by Protocol: Track which network protocols (e.g., TCP, UDP, ICMP) are consuming the most bandwidth to identify traffic patterns and potential security threats.
4. DNS Resolution Time: Monitor DNS resolution time to measure the time it takes for DNS requests to resolve domain names to IP addresses, ensuring that DNS services are responsive.
5. DHCP Response Time: Track the time it takes for DHCP servers to assign IP addresses to devices on the network, helping optimize IP address allocation.
6. Wireless Signal Strength: For Wi-Fi networks, monitor signal strength, signal-to-noise ratio (SNR), and interference levels to ensure optimal wireless connectivity.
7. TCP/UDP Port Utilization: Identify which TCP/UDP ports are in use and their associated applications to manage application-specific traffic.
8. Virtual LAN (VLAN) Tagging: Monitor VLAN tagging to ensure proper isolation and traffic management within VLANs.
9. VoIP Metrics: For VoIP (Voice over IP) networks, track metrics like jitter, MOS (Mean Opinion Score), and call quality to ensure clear and reliable voice communication.
10. IPv6 Monitoring: If your network uses IPv6, monitor IPv6 traffic, addresses, and address allocation to ensure IPv6 connectivity and security.
11. Device Uptime: Track the uptime of critical network devices to identify devices that frequently experience downtime or instability.
12. Resource Utilization: Monitor CPU, memory, and disk utilization on network devices and servers to optimize resource allocation and prevent resource bottlenecks.
13. Network Latency by Destination: Monitor latency to specific destinations or external services to ensure consistent connectivity and performance for critical services.
14. Quality of Service (QoS) Metrics: Track QoS parameters to ensure that high-priority traffic receives the necessary priority and resources.
By focusing on these key metrics, you equip yourself with the insights needed to proactively manage and optimize your LAN. These metrics serve as the vital signs of your LAN's health, allowing you to address issues promptly and ensure a network that thrives in today's demanding digital landscape.
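Among the VoIP metrics above, jitter has a standard estimator defined in RFC 3550 (the RTP specification): a running average of the absolute difference in transit time between consecutive packets, smoothed with a gain of 1/16. A sketch with simulated transit times:

```python
def rfc3550_jitter(transit_times_ms: list) -> float:
    """Interarrival jitter estimator from RFC 3550:
    J = J + (|D(i-1, i)| - J) / 16, where D is the difference in
    one-way transit time between consecutive packets."""
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)
        jitter += (d - jitter) / 16
    return jitter

steady = [20.0] * 50        # identical transit times: zero jitter
bursty = [20.0, 35.0] * 25  # alternating 15 ms swings
print(f"steady: {rfc3550_jitter(steady):.2f} ms, "
      f"bursty: {rfc3550_jitter(bursty):.2f} ms")
```

The 1/16 smoothing factor makes the estimate react gradually, so a single delayed packet doesn't dominate the reported jitter value.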
19 Network Metrics: How to Measure Network Performance
Learn how to measure network performance with key network metrics like throughput, latency, packet loss, jitter, packet reordering and more!
Identifying LAN Network Issues with LAN Monitoring
Effective LAN monitoring goes beyond measuring metrics; it's about recognizing network problems and bottlenecks that can impact network performance, security, and scalability. So let's dive deeper into common LAN network issues, how to spot them, and the telltale signs of subpar LAN performance.
Identifying these anomalies, with the help of tools like Obkio's Network Performance Monitoring tool, is the first step toward proactive resolution and network optimization.
I. Common LAN Network Issues
Understanding the common issues that can plague LAN networks is crucial for effective identification and resolution. Some prevalent LAN network issues include:
1. Network Congestion: Network congestion is caused by excessive network traffic leading to slow data transfer and performance degradation. Monitor bandwidth utilization and network traffic. Consistently high traffic levels during peak hours or sudden drops in network performance can signal congestion issues.
2. Packet Loss: Packet loss occurs when data packets fail to reach their destination across a network, resulting in retransmissions, increased latency, and decreased throughput. You can use a network monitoring tool to detect packet loss using packet capture or synthetic monitoring.
3. Latency: Latency issues cause a delay in data transmission, impacting real-time applications like VoIP or online gaming and leading to poor user experiences. To identify latency issues, measure round-trip time (RTT) between network devices; high latency shows up as a noticeable delay between a data request and its response.
4. Jitter: Jitter is variability in packet arrival times, causing disruptions in audio and video communications. Monitor inter-packet arrival times with a tool like Obkio's NPM; inconsistent arrival intervals indicate jitter issues.
5. Network Loops: Misconfigurations or faulty cabling can create loops that cause broadcast storms and network instability. To identify network loops, watch for excessive broadcast traffic or network instability; Spanning Tree Protocol (STP) logs or tools like Wireshark can help detect them.
6. Broadcast/Multicast Storms: Excessive broadcast or multicast traffic flooding the network. Network monitoring tools can detect unusual spikes in broadcast or multicast traffic; any significant deviation from your normal unicast traffic patterns suggests a storm.
7. Security Breaches: Unauthorized access, malware, or other security threats compromising network integrity. Monitor security logs and set up intrusion detection systems (IDS) and intrusion prevention systems (IPS) to detect suspicious activities or unauthorized access attempts.
8. DNS Issues: DNS issues leading to slow or unreliable domain name resolution affect Internet access and can lead to connectivity problems. Use NPM tools to monitor DNS response times, as consistently long response times or DNS lookup failures indicate DNS issues.
9. DHCP Problems: DHCP issues cause IP address conflicts, lease problems, or DHCP server failures that disrupt network connectivity. To identify them, monitor DHCP logs for error messages or examine IP address allocation patterns for conflicts.
10. Hardware Failures: Malfunctioning network devices, such as switches or routers, causing downtime. Network monitoring tools provide alerts and status updates for hardware failures or unresponsive devices.
11. Configuration Errors: Misconfigured network settings leading to connectivity problems or security vulnerabilities. Regularly audit network configurations to check for errors or inconsistencies.
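As a worked example for the jitter issue above, one simple way to summarize jitter is the average absolute difference between consecutive RTT samples. This is a deliberately simplified sketch with made-up sample values; production tools typically use smoothed estimators in the style of RFC 3550.

```python
def mean_jitter(rtt_samples_ms):
    """Average absolute variation between consecutive RTT samples, in ms."""
    diffs = [abs(b - a) for a, b in zip(rtt_samples_ms, rtt_samples_ms[1:])]
    return sum(diffs) / len(diffs)


# A latency spike in the third sample drives the jitter up.
samples = [20.0, 19.5, 25.5, 20.0, 20.0]
print(mean_jitter(samples))  # 3.0
```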
II. How to Identify LAN Network Issues
Identifying LAN network issues requires a proactive approach and a combination of tools, techniques, and vigilance. Tools like Obkio's Network Performance Monitoring tool play a crucial role in this process. Here's how to effectively identify network issues:
• Network Monitoring Tools: Utilize network monitoring tools, such as Obkio's solution, to continuously collect data on network performance, bandwidth utilization, and device status.
• Performance Baselines: Establish performance baselines using Obkio's monitoring data to compare against current network behaviour and detect deviations.
• Alerts and Thresholds: Configure alerts and thresholds within Obkio's tool to receive notifications when specific metrics exceed predefined limits.
• Synthetic Monitoring: Utilize Obkio's synthetic monitoring feature to continuously measure network performance using synthetic traffic - with no packet capture required. Synthetic monitoring allows you to proactively identify network issues before they affect end users or application performance.
• Log Analysis: Review log files from network devices and servers, supplementing Obkio's data, to identify error messages or security events.
• Regular Audits: Conduct regular network audits using Obkio's tool to check for misconfigurations, security vulnerabilities, and compliance issues.
• End-User Feedback: Solicit feedback from end-users, leveraging Obkio's insights, to identify performance issues and connectivity problems they may encounter.
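The baseline-and-threshold approach described above can be sketched in a few lines. The 50% tolerance and the sample latencies are arbitrary assumptions for illustration; real monitoring tools let you tune these per metric.

```python
def breaches(samples, baseline, tolerance_pct=50):
    """Return the samples that exceed the baseline by more than tolerance_pct."""
    limit = baseline * (1 + tolerance_pct / 100)
    return [s for s in samples if s > limit]


# A 13 ms latency baseline with a 50% tolerance flags anything above 19.5 ms.
latency_ms = [12, 14, 13, 35, 12, 40]
print(breaches(latency_ms, baseline=13))  # [35, 40]
```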
III. Signs of Poor LAN Network Performance
Recognizing the signs of poor LAN performance, with the assistance of Obkio's Network Performance Monitoring tool, is essential for early detection. These signs may include:
1. Slow Data Transfer: Sluggish file transfers, downloads, or uploads, as observed and monitored by Obkio's tool.
2. Application Lag: Delayed or unresponsive applications, especially in real-time scenarios, tracked through Obkio's monitoring.
3. Frequent Disconnects: Devices frequently lose connectivity to the network, as recorded by Obkio's data.
4. High Packet Loss: Increased packet loss rates observed in network monitoring data provided by Obkio.
5. Increased Latency: Delays in data transmission, causing noticeable delays in communication, as measured by Obkio's tool.
6. Excessive Jitter: Inconsistent audio or video quality during VoIP calls or video conferencing, monitored through Obkio.
7. Unexplained Downtime: Unexpected network outages or devices going offline, tracked using Obkio's network performance data.
8. Spikes in Bandwidth Utilization: Sudden spikes in bandwidth usage causing congestion, detected through Obkio's monitoring capabilities.
By being vigilant and proactive in monitoring and analyzing your LAN network's behaviour, supplemented by Obkio's Network Performance Monitoring tool, you can swiftly identify anomalies, diagnose issues, and implement necessary remedies to maintain optimal LAN performance and reliability.
Exploring More LAN Monitoring Tools & Techniques
In the world of LAN monitoring, having the right tools at your disposal is essential for navigating the complex web of network performance. We've already covered Network Performance Monitoring tools, the most comprehensive option for identifying and troubleshooting LAN performance issues.
But there are other techniques and tools. So let's explore the tools and software you need to excel at LAN monitoring, and how to make an informed decision when selecting the right monitoring solution for your specific network needs.
I. Essential LAN Monitoring Tools and Software
When it comes to LAN monitoring, a plethora of tools and software options are available to suit various network environments and objectives. Here are some of the essential tools and software categories you should be familiar with:
• Network Monitoring Software: Comprehensive Network Monitoring tools, like Obkio, provide a range of network monitoring features, including real-time performance data, alerts, and historical analysis.
• Packet Analyzers: Tools such as Wireshark offer deep packet inspection capabilities, allowing you to scrutinize network traffic at a granular level to identify and troubleshoot issues.
• Flow Analyzers: Flow-based monitoring tools like NetFlow or sFlow collectors help you understand traffic patterns, bandwidth utilization, and application usage across your network.
• SNMP (Simple Network Management Protocol): SNMP monitoring tools allow you to collect data from network devices and create a centralized management system to monitor device health, performance, and status.
• Cloud-Based Monitoring Solutions: Cloud-based options like Obkio, which offer ease of deployment and scalability, are gaining popularity for their ability to monitor LANs, WANs, and cloud-based resources.
• Security Information and Event Management (SIEM) Tools: SIEM solutions like Splunk or IBM QRadar can provide both network performance and security monitoring, making them valuable for detecting and mitigating security threats.
II. Choosing the Right LAN Monitoring Solution for Your Network
Selecting the right LAN monitoring solution for your network is a critical decision that can significantly impact your network's performance, security, and management. Consider the following factors when making your choice:
1. Network Size and Complexity: Assess the size and complexity of your LAN. Larger networks with numerous devices and subnets may require more robust and scalable monitoring solutions.
2. Budget Constraints: Consider your budget limitations. Some monitoring tools are open-source and cost-effective, while others come with subscription fees. Ensure that the chosen solution aligns with your financial resources.
3. Scalability: If your network is expected to grow in the future, choose a monitoring solution that can scale with your needs without major disruptions.
4. Ease of Use: Evaluate the user-friendliness of the monitoring tool. A solution that offers an intuitive interface and easy setup can save time and reduce the learning curve for your team.
5. Required Features: Identify the specific features and functionalities you need, such as real-time alerts, historical data analysis, or integration with other systems like SIEM or cloud platforms.
6. Vendor Reputation and Support: Research the reputation of the monitoring tool vendor and the quality of their customer support services.
By carefully considering these factors and exploring the array of LAN monitoring tools and software available, you can make an informed decision that aligns with your network's unique requirements, ensuring effective monitoring and management.
How to Improve LAN Network Performance: LAN Monitoring Best Practices
As we approach the conclusion of this article, we've journeyed through the intricacies of LAN network management, exploring strategies to improve performance, enhance security, and identify anomalies. Before letting you off, we'd like to leave you with a final set of tips and insights that can further empower you in your mission to master LAN monitoring and optimization.
Improving LAN (Local Area Network) network performance is essential for ensuring that your organization's internal communication, data transfer, and access to resources remain fast and efficient.
1. Monitor Network Performance:
Implement a robust network monitoring solution to continuously track and analyze network performance metrics. This allows you to identify issues promptly and make data-driven optimizations.
2. Set Up Effective Alerting Systems:
Identify critical network metrics, define clear thresholds, and establish escalation procedures. Prioritize alerts based on severity and automate notifications for fast and effective network troubleshooting.
3. Regularly Update Hardware:
Ensure that network devices such as switches, routers, and access points are up to date. Newer hardware often offers better performance and security features.
4. Optimize Network Design:
Review and optimize your network's physical and logical design to minimize bottlenecks and reduce latency. Consider factors like switch placement, VLANs, and network segmentation.
5. Upgrade Network Switches:
Invest in managed switches that offer Quality of Service (QoS) features to prioritize critical traffic, such as VoIP or video conferencing.
6. Implement VLANs:
Use Virtual LANs (VLANs) to segregate network traffic, improving security and reducing broadcast traffic that can cause congestion.
7. Update Firmware and Software:
Regularly update the firmware and software of network devices and servers to ensure they are patched against security vulnerabilities and perform optimally.
8. Optimize DNS and DHCP:
Ensure efficient domain name resolution (DNS) and IP address assignment (DHCP). Use local caching DNS servers and properly configure DHCP lease times.
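A quick way to spot-check DNS response time from any host is to time a resolver call. This is a rough sketch, not a full monitor (a real tool would sample repeatedly and track the trend), and `localhost` stands in for whichever name matters to you.

```python
import socket
import time


def dns_response_ms(hostname):
    """Time one name resolution; repeated spikes here suggest DNS trouble."""
    start = time.perf_counter()
    socket.getaddrinfo(hostname, None)
    return (time.perf_counter() - start) * 1000


print(f"localhost resolved in {dns_response_ms('localhost'):.2f} ms")
```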
9. Traffic Prioritization:
Implement QoS policies to prioritize critical applications and services, ensuring they receive adequate bandwidth and low latency.
10. Optimize Wireless LAN (Wi-Fi):
If using Wi-Fi, optimize access point placement, use 5 GHz frequency bands, and minimize interference sources to enhance wireless performance.
11. Load Balancing:
Implement load balancing solutions to evenly distribute network traffic across multiple paths or servers, reducing congestion and enhancing redundancy.
12. Firewall Optimization:
Configure firewalls to allow necessary traffic while blocking unwanted or malicious data packets, optimizing both security and performance.
13. Content Delivery Network (CDN):
Use CDNs to cache and serve static content closer to end-users, reducing latency and accelerating content delivery.
By implementing these tips and continually monitoring and optimizing your LAN network, you can ensure that it operates at peak performance, providing a reliable and efficient infrastructure for your organization's daily operations.
The Power of LAN Monitoring: A Recap
LAN monitoring serves as the digital pulse of modern organizations, ensuring that data flows seamlessly, applications run smoothly, and communication remains uninterrupted. Throughout this article, we've delved into the critical role LAN networks play in the technology ecosystem, from connecting devices to enabling mission-critical operations. We've explored the need for effective LAN monitoring to maintain network health, optimize performance, and enhance security.
By understanding the common challenges that LAN networks face, identifying network anomalies, and implementing best practices, you empower your organization to navigate the complexities of modern IT infrastructure with confidence. LAN monitoring isn't just a proactive approach; it's a strategic investment in the reliability and efficiency of your network.
Taking Your LAN Network Performance to the Next Level with Obkio's Network Performance Monitoring Tool
To supercharge your LAN network management, consider integrating Obkio's Network Performance Monitoring tool into your arsenal. This powerful tool offers a comprehensive suite of features designed to provide complete visibility into your LAN network's performance.
• Real-time Monitoring: Obkio provides real-time monitoring of crucial network metrics, including latency, packet loss, and jitter, ensuring you have immediate visibility into network performance.
• Proactive Issue Detection: The tool detects network problems before they impact users, allowing for proactive measures to minimize disruptions and downtime.
• User-Friendly Interface: Obkio offers an intuitive and user-friendly interface that makes it easy for both IT professionals and network administrators to monitor, analyze, and optimize LAN networks.
• Comprehensive Metrics: It measures a wide range of network performance metrics, providing a comprehensive view of your LAN network's health and performance.
• Visualizations: Obkio's Network Response Time Graph provides clear visualizations that make it easy to pinpoint issues and identify performance trends.
• Complete Network Visibility: With Obkio, you gain complete visibility into your LAN network, enabling you to make informed decisions and take necessary actions.
By leveraging Obkio's Network Performance Monitoring tool, you not only gain deeper insights into your LAN network but also equip yourself with the means to elevate network performance, ensure seamless connectivity, and strengthen your organization's technological backbone.
So, take the leap, harness the power of LAN monitoring, and elevate your LAN network to new heights with Obkio. Your organization's digital future depends on it.
remove default theme, add retweet count and favorites count
[rainbowstream.git] / rainbowstream / rainbow.py
1 """
2 Colorful user's timeline stream
3 """
4 import os
5 import os.path
6 import sys
7 import signal
8 import argparse
9 import time
10 import threading
11 import requests
12 import webbrowser
13
14 from twitter.stream import TwitterStream, Timeout, HeartbeatTimeout, Hangup
15 from twitter.api import *
16 from twitter.oauth import OAuth, read_token_file
17 from twitter.oauth_dance import oauth_dance
18 from twitter.util import printNicely
19
20 from .draw import *
21 from .colors import *
22 from .config import *
23 from .consumer import *
24 from .interactive import *
25 from .c_image import *
26 from .py3patch import *
27
28 # Global values
29 g = {}
30
31 # Lock for streams
32 StreamLock = threading.Lock()
33
34 # Commands
35 cmdset = [
36 'switch',
37 'trend',
38 'home',
39 'view',
40 'mentions',
41 't',
42 'rt',
43 'quote',
44 'allrt',
45 'fav',
46 'rep',
47 'del',
48 'ufav',
49 's',
50 'mes',
51 'show',
52 'open',
53 'ls',
54 'inbox',
55 'sent',
56 'trash',
57 'whois',
58 'fl',
59 'ufl',
60 'mute',
61 'unmute',
62 'muting',
63 'block',
64 'unblock',
65 'report',
66 'list',
67 'cal',
68 'config',
69 'theme',
70 'h',
71 'p',
72 'r',
73 'c',
74 'q'
75 ]
76
77
78 def parse_arguments():
79 """
80 Parse the arguments
81 """
82 parser = argparse.ArgumentParser(description=__doc__ or "")
83 parser.add_argument(
84 '-to',
85 '--timeout',
86 help='Timeout for the stream (seconds).')
87 parser.add_argument(
88 '-ht',
89 '--heartbeat-timeout',
90 help='Set heartbeat timeout.',
91 default=90)
92 parser.add_argument(
93 '-nb',
94 '--no-block',
95 action='store_true',
96 help='Set stream to non-blocking.')
97 parser.add_argument(
98 '-tt',
99 '--track-keywords',
100 help='Search the stream for specific text.')
101 parser.add_argument(
102 '-fil',
103 '--filter',
104 help='Filter specific screen_name.')
105 parser.add_argument(
106 '-ig',
107 '--ignore',
108 help='Ignore specific screen_name.')
109 parser.add_argument(
110 '-iot',
111 '--image-on-term',
112 action='store_true',
113 help='Display all image on terminal.')
114 return parser.parse_args()
115
116
117 def authen():
118 """
119 Authenticate with Twitter OAuth
120 """
121 # When using rainbow stream you must authorize.
122 twitter_credential = os.environ.get(
123 'HOME',
124 os.environ.get(
125 'USERPROFILE',
126 '')) + os.sep + '.rainbow_oauth'
127 if not os.path.exists(twitter_credential):
128 oauth_dance("Rainbow Stream",
129 CONSUMER_KEY,
130 CONSUMER_SECRET,
131 twitter_credential)
132 oauth_token, oauth_token_secret = read_token_file(twitter_credential)
133 return OAuth(
134 oauth_token,
135 oauth_token_secret,
136 CONSUMER_KEY,
137 CONSUMER_SECRET)
138
139
140 def build_mute_dict(dict_data=False):
141 """
142 Build muting list
143 """
144 t = Twitter(auth=authen())
145 # Init cursor
146 next_cursor = -1
147 screen_name_list = []
148 name_list = []
149 # Cursor loop
150 while next_cursor != 0:
151 list = t.mutes.users.list(
152 screen_name=g['original_name'],
153 cursor=next_cursor,
154 skip_status=True,
155 include_entities=False,
156 )
157 screen_name_list += ['@' + u['screen_name'] for u in list['users']]
158 name_list += [u['name'] for u in list['users']]
159 next_cursor = list['next_cursor']
160 # Return dict or list
161 if dict_data:
162 return dict(zip(screen_name_list, name_list))
163 else:
164 return screen_name_list
165
166
167 def init(args):
168 """
169 Init function
170 """
171 # Handle Ctrl C
172 ctrl_c_handler = lambda signum, frame: quit()
173 signal.signal(signal.SIGINT, ctrl_c_handler)
174 # Get name
175 t = Twitter(auth=authen())
176 name = '@' + t.account.verify_credentials()['screen_name']
177 if not get_config('PREFIX'):
178 set_config('PREFIX', name)
179 g['original_name'] = name[1:]
180 g['decorated_name'] = lambda x: color_func(
181 c['DECORATED_NAME'])('[' + x + ']: ')
182 # Theme init
183 files = os.listdir(os.path.dirname(__file__) + '/colorset')
184 themes = [f.split('.')[0] for f in files if f.split('.')[-1] == 'json']
185 g['themes'] = themes
186 # Startup cmd
187 g['previous_cmd'] = ''
188 # Semaphore init
189 c['lock'] = False
190 c['pause'] = False
191 # Init tweet dict and message dict
192 c['tweet_dict'] = []
193 c['message_dict'] = []
194 # Image on term
195 c['IMAGE_ON_TERM'] = args.image_on_term
196 set_config('IMAGE_ON_TERM', str(c['IMAGE_ON_TERM']))
197 # Mute dict
198 c['IGNORE_LIST'] += build_mute_dict()
199
200
201 def switch():
202 """
203 Switch stream
204 """
205 try:
206 target = g['stuff'].split()[0]
207 # Filter and ignore
208 args = parse_arguments()
209 try:
210 if g['stuff'].split()[-1] == '-f':
211 guide = 'To ignore an option, just hit Enter key.'
212 printNicely(light_magenta(guide))
213 only = raw_input('Only nicks [Ex: @xxx,@yy]: ')
214 ignore = raw_input('Ignore nicks [Ex: @xxx,@yy]: ')
215 args.filter = filter(None, only.split(','))
216 args.ignore = filter(None, ignore.split(','))
217 elif g['stuff'].split()[-1] == '-d':
218 args.filter = c['ONLY_LIST']
219 args.ignore = c['IGNORE_LIST']
220 except:
221 printNicely(red('Sorry, wrong format.'))
222 return
223 # Public stream
224 if target == 'public':
225 keyword = g['stuff'].split()[1]
226 if keyword[0] == '#':
227 keyword = keyword[1:]
228 # Kill old thread
229 g['stream_stop'] = True
230 args.track_keywords = keyword
231 # Start new thread
232 th = threading.Thread(
233 target=stream,
234 args=(
235 c['PUBLIC_DOMAIN'],
236 args))
237 th.daemon = True
238 th.start()
239 # Personal stream
240 elif target == 'mine':
241 # Kill old thread
242 g['stream_stop'] = True
243 # Start new thread
244 th = threading.Thread(
245 target=stream,
246 args=(
247 c['USER_DOMAIN'],
248 args,
249 g['original_name']))
250 th.daemon = True
251 th.start()
252 printNicely('')
253 if args.filter:
254 printNicely(cyan('Only: ' + str(args.filter)))
255 if args.ignore:
256 printNicely(red('Ignore: ' + str(args.ignore)))
257 printNicely('')
258 except:
259 printNicely(red('Sorry I can\'t understand.'))
260
261
262 def trend():
263 """
264 Trend
265 """
266 t = Twitter(auth=authen())
267 # Get country and town
268 try:
269 country = g['stuff'].split()[0]
270 except:
271 country = ''
272 try:
273 town = g['stuff'].split()[1]
274 except:
275 town = ''
276 avail = t.trends.available()
277 # World wide
278 if not country:
279 trends = t.trends.place(_id=1)[0]['trends']
280 print_trends(trends)
281 else:
282 for location in avail:
283 # Search for country and Town
284 if town:
285 if location['countryCode'] == country \
286 and location['placeType']['name'] == 'Town' \
287 and location['name'] == town:
288 trends = t.trends.place(_id=location['woeid'])[0]['trends']
289 print_trends(trends)
290 # Search for country only
291 else:
292 if location['countryCode'] == country \
293 and location['placeType']['name'] == 'Country':
294 trends = t.trends.place(_id=location['woeid'])[0]['trends']
295 print_trends(trends)
296
297
298 def home():
299 """
300 Home
301 """
302 t = Twitter(auth=authen())
303 num = c['HOME_TWEET_NUM']
304 if g['stuff'].isdigit():
305 num = int(g['stuff'])
306 for tweet in reversed(t.statuses.home_timeline(count=num)):
307 draw(t=tweet)
308 printNicely('')
309
310
311 def view():
312 """
313 Friend view
314 """
315 t = Twitter(auth=authen())
316 user = g['stuff'].split()[0]
317 if user[0] == '@':
318 try:
319 num = int(g['stuff'].split()[1])
320 except:
321 num = c['HOME_TWEET_NUM']
322 for tweet in reversed(t.statuses.user_timeline(count=num, screen_name=user[1:])):
323 draw(t=tweet)
324 printNicely('')
325 else:
326 printNicely(red('A name should begin with a \'@\''))
327
328
329 def mentions():
330 """
331 Mentions timeline
332 """
333 t = Twitter(auth=authen())
334 num = c['HOME_TWEET_NUM']
335 if g['stuff'].isdigit():
336 num = int(g['stuff'])
337 for tweet in reversed(t.statuses.mentions_timeline(count=num)):
338 draw(t=tweet)
339 printNicely('')
340
341
342 def tweet():
343 """
344 Tweet
345 """
346 t = Twitter(auth=authen())
347 t.statuses.update(status=g['stuff'])
348
349
350 def retweet():
351 """
352 ReTweet
353 """
354 t = Twitter(auth=authen())
355 try:
356 id = int(g['stuff'].split()[0])
357 except:
358 printNicely(red('Sorry I can\'t understand.'))
359 return
360 tid = c['tweet_dict'][id]
361 t.statuses.retweet(id=tid, include_entities=False, trim_user=True)
362
363
364 def quote():
365 """
366 Quote a tweet
367 """
368 t = Twitter(auth=authen())
369 try:
370 id = int(g['stuff'].split()[0])
371 except:
372 printNicely(red('Sorry I can\'t understand.'))
373 return
374 tid = c['tweet_dict'][id]
375 tweet = t.statuses.show(id=tid)
376 screen_name = tweet['user']['screen_name']
377 text = tweet['text']
378 quote = '\"@' + screen_name + ': ' + text + '\"'
379 quote = quote.encode('utf8')
380 notice = light_magenta('Compose mode ')
381 notice += light_yellow('(Enter nothing will cancel the quote)')
382 notice += light_magenta(':')
383 printNicely(notice)
384 extra = raw_input(quote)
385 if extra:
386 t.statuses.update(status=quote + extra)
387 else:
388 printNicely(light_magenta('No text added.'))
389
390
391 def allretweet():
392 """
393 List all retweet
394 """
395 t = Twitter(auth=authen())
396 # Get rainbow id
397 try:
398 id = int(g['stuff'].split()[0])
399 except:
400 printNicely(red('Sorry I can\'t understand.'))
401 return
402 tid = c['tweet_dict'][id]
403 # Get display num if exist
404 try:
405 num = int(g['stuff'].split()[1])
406 except:
407 num = c['RETWEETS_SHOW_NUM']
408 # Get result and display
409 rt_ary = t.statuses.retweets(id=tid, count=num)
410 if not rt_ary:
411 printNicely(magenta('This tweet has no retweet.'))
412 return
413 for tweet in reversed(rt_ary):
414 draw(t=tweet)
415 printNicely('')
416
417
418 def favorite():
419 """
420 Favorite
421 """
422 t = Twitter(auth=authen())
423 try:
424 id = int(g['stuff'].split()[0])
425 except:
426 printNicely(red('Sorry I can\'t understand.'))
427 return
428 tid = c['tweet_dict'][id]
429 t.favorites.create(_id=tid, include_entities=False)
430 printNicely(green('Favorited.'))
431 draw(t.statuses.show(id=tid))
432 printNicely('')
433
434
435 def reply():
436 """
437 Reply
438 """
439 t = Twitter(auth=authen())
440 try:
441 id = int(g['stuff'].split()[0])
442 except:
443 printNicely(red('Sorry I can\'t understand.'))
444 return
445 tid = c['tweet_dict'][id]
446 user = t.statuses.show(id=tid)['user']['screen_name']
447 status = ' '.join(g['stuff'].split()[1:])
448 status = '@' + user + ' ' + unc(status)
449 t.statuses.update(status=status, in_reply_to_status_id=tid)
450
451
452 def delete():
453 """
454 Delete
455 """
456 t = Twitter(auth=authen())
457 try:
458 id = int(g['stuff'].split()[0])
459 except:
460 printNicely(red('Sorry I can\'t understand.'))
461 return
462 tid = c['tweet_dict'][id]
463 t.statuses.destroy(id=tid)
464 printNicely(green('Okay it\'s gone.'))
465
466
467 def unfavorite():
468 """
469 Unfavorite
470 """
471 t = Twitter(auth=authen())
472 try:
473 id = int(g['stuff'].split()[0])
474 except:
475 printNicely(red('Sorry I can\'t understand.'))
476 return
477 tid = c['tweet_dict'][id]
478 t.favorites.destroy(_id=tid)
479 printNicely(green('Okay it\'s unfavorited.'))
480 draw(t.statuses.show(id=tid))
481 printNicely('')
482
483
484 def search():
485 """
486 Search
487 """
488 t = Twitter(auth=authen())
489 g['stuff'] = g['stuff'].strip()
490 rel = t.search.tweets(q=g['stuff'])['statuses']
491 if rel:
492 printNicely('Newest tweets:')
493 for i in reversed(xrange(min(len(rel), c['SEARCH_MAX_RECORD']))):
494 draw(t=rel[i],
495 keyword=g['stuff'])
496 printNicely('')
497 else:
498 printNicely(magenta('I\'m afraid there is no result'))
499
500
501 def message():
502 """
503 Send a direct message
504 """
505 t = Twitter(auth=authen())
506 user = g['stuff'].split()[0]
507 if user.startswith('@'):
508 try:
509 content = g['stuff'].split()[1]
510 except:
511 printNicely(red('Sorry I can\'t understand.')); return
512 t.direct_messages.new(
513 screen_name=user[1:],
514 text=content
515 )
516 printNicely(green('Message sent.'))
517 else:
518 printNicely(red('A name should begin with a \'@\''))
519
520
521 def show():
522 """
523 Show image
524 """
525 t = Twitter(auth=authen())
526 try:
527 target = g['stuff'].split()[0]
528 if target != 'image':
529 return
530 id = int(g['stuff'].split()[1])
531 tid = c['tweet_dict'][id]
532 tweet = t.statuses.show(id=tid)
533 media = tweet['entities']['media']
534 for m in media:
535 res = requests.get(m['media_url'])
536 img = Image.open(BytesIO(res.content))
537 img.show()
538 except:
539 printNicely(red('Sorry I can\'t show this image.'))
540
541
542 def urlopen():
543 """
544 Open url
545 """
546 t = Twitter(auth=authen())
547 try:
548 if not g['stuff'].isdigit():
549 return
550 tid = c['tweet_dict'][int(g['stuff'])]
551 tweet = t.statuses.show(id=tid)
552 link_ary = [
553 u for u in tweet['text'].split() if u.startswith('http://')]
554 if not link_ary:
555 printNicely(light_magenta('No url here @.@!'))
556 return
557 for link in link_ary:
558 webbrowser.open(link)
559 except:
560 printNicely(red('Sorry I can\'t open url in this tweet.'))
561
562
563 def ls():
564 """
565 List friends for followers
566 """
567 t = Twitter(auth=authen())
568 # Get name
569 try:
570 name = g['stuff'].split()[1]
571 if name.startswith('@'):
572 name = name[1:]
573 else:
574 printNicely(red('A name should begin with a \'@\''))
575 raise Exception('Invalid name')
576 except:
577 name = g['original_name']
578 # Get list followers or friends
579 try:
580 target = g['stuff'].split()[0]
581 except:
582 printNicely(red('Omg some syntax is wrong.')); return
583 # Init cursor
584 d = {'fl': 'followers', 'fr': 'friends'}
585 next_cursor = -1
586 rel = {}
587 # Cursor loop
588 while next_cursor != 0:
589 list = getattr(t, d[target]).list(
590 screen_name=name,
591 cursor=next_cursor,
592 skip_status=True,
593 include_entities=False,
594 )
595 for u in list['users']:
596 rel[u['name']] = '@' + u['screen_name']
597 next_cursor = list['next_cursor']
598 # Print out result
599 printNicely('All: ' + str(len(rel)) + ' ' + d[target] + '.')
600 for name in rel:
601 user = ' ' + cycle_color(name)
602 user += color_func(c['TWEET']['nick'])(' ' + rel[name] + ' ')
603 printNicely(user)
604
605
606 def inbox():
607 """
608 Inbox direct messages
609 """
610 t = Twitter(auth=authen())
611 num = c['MESSAGES_DISPLAY']
612 rel = []
613 if g['stuff'].isdigit():
614 num = int(g['stuff'])
615 cur_page = 1
616 # Max message per page is 20 so we have to loop
617 while num > 20:
618 rel = rel + t.direct_messages(
619 count=20,
620 page=cur_page,
621 include_entities=False,
622 skip_status=False
623 )
624 num -= 20
625 cur_page += 1
626 rel = rel + t.direct_messages(
627 count=num,
628 page=cur_page,
629 include_entities=False,
630 skip_status=False
631 )
632 # Display
633 printNicely('Inbox: newest ' + str(len(rel)) + ' messages.')
634 for m in reversed(rel):
635 print_message(m)
636 printNicely('')
637
638
639 def sent():
640 """
641 Sent direct messages
642 """
643 t = Twitter(auth=authen())
644 num = c['MESSAGES_DISPLAY']
645 rel = []
646 if g['stuff'].isdigit():
647 num = int(g['stuff'])
648 cur_page = 1
649 # Max message per page is 20 so we have to loop
650 while num > 20:
651 rel = rel + t.direct_messages.sent(
652 count=20,
653 page=cur_page,
654 include_entities=False,
655 skip_status=False
656 )
657 num -= 20
658 cur_page += 1
659 rel = rel + t.direct_messages.sent(
660 count=num,
661 page=cur_page,
662 include_entities=False,
663 skip_status=False
664 )
665 # Display
666 printNicely('Sent: newest ' + str(len(rel)) + ' messages.')
667 for m in reversed(rel):
668 print_message(m)
669 printNicely('')
670
671
672 def trash():
673 """
674 Remove message
675 """
676 t = Twitter(auth=authen())
677 try:
678 id = int(g['stuff'].split()[0])
679 except:
680 printNicely(red('Sorry I can\'t understand.'))
return
681 mid = c['message_dict'][id]
682 t.direct_messages.destroy(id=mid)
683 printNicely(green('Message deleted.'))
684
685
686 def whois():
687 """
688 Show profile of a specific user
689 """
690 t = Twitter(auth=authen())
691 screen_name = g['stuff'].split()[0]
692 if screen_name.startswith('@'):
693 try:
694 user = t.users.show(
695 screen_name=screen_name[1:],
696 include_entities=False)
697 show_profile(user)
698 except:
699 printNicely(red('Omg no user.'))
700 else:
701 printNicely(red('A name should begin with a \'@\''))
702
703
704 def follow():
705 """
706 Follow a user
707 """
708 t = Twitter(auth=authen())
709 screen_name = g['stuff'].split()[0]
710 if screen_name.startswith('@'):
711 t.friendships.create(screen_name=screen_name[1:], follow=True)
712 printNicely(green('You are following ' + screen_name + ' now!'))
713 else:
714 printNicely(red('A name should begin with a \'@\''))
715
716
717 def unfollow():
718 """
719 Unfollow a user
720 """
721 t = Twitter(auth=authen())
722 screen_name = g['stuff'].split()[0]
723 if screen_name.startswith('@'):
724 t.friendships.destroy(
725 screen_name=screen_name[1:],
726 include_entities=False)
727 printNicely(green('Unfollow ' + screen_name + ' success!'))
728 else:
729 printNicely(red('A name should begin with a \'@\''))
730
731
732 def mute():
733 """
734 Mute a user
735 """
736 t = Twitter(auth=authen())
737 try:
738 screen_name = g['stuff'].split()[0]
739 except:
740 printNicely(red('A name should be specified. '))
741 return
742 if screen_name.startswith('@'):
743 try:
744 rel = t.mutes.users.create(screen_name=screen_name[1:])
745 if isinstance(rel, dict):
746 printNicely(green(screen_name + ' is muted.'))
747 c['IGNORE_LIST'] += [unc(screen_name)]
748 c['IGNORE_LIST'] = list(set(c['IGNORE_LIST']))
749 else:
750 printNicely(red(rel))
751 except:
752 printNicely(red('Something is wrong, can not mute now :('))
753 else:
754 printNicely(red('A name should begin with a \'@\''))
755
756
757 def unmute():
758 """
759 Unmute a user
760 """
761 t = Twitter(auth=authen())
762 try:
763 screen_name = g['stuff'].split()[0]
764 except:
765 printNicely(red('A name should be specified. '))
766 return
767 if screen_name.startswith('@'):
768 try:
769 rel = t.mutes.users.destroy(screen_name=screen_name[1:])
770 if isinstance(rel, dict):
771 printNicely(green(screen_name + ' is unmuted.'))
772 c['IGNORE_LIST'].remove(screen_name)
773 else:
774 printNicely(red(rel))
775 except:
776 printNicely(red('Maybe you are not muting this person ?'))
777 else:
778 printNicely(red('A name should begin with a \'@\''))
779
780
781 def muting():
782 """
783 List muted users
784 """
785 # Get dict of muting users
786 md = build_mute_dict(dict_data=True)
787 printNicely('All: ' + str(len(md)) + ' people.')
788 for name in md:
789 user = ' ' + cycle_color(md[name])
790 user += color_func(c['TWEET']['nick'])(' ' + name + ' ')
791 printNicely(user)
792 # Update from Twitter
793 c['IGNORE_LIST'] = [n for n in md]
794
795
796 def block():
797 """
798 Block a user
799 """
800 t = Twitter(auth=authen())
801 screen_name = g['stuff'].split()[0]
802 if screen_name.startswith('@'):
803 t.blocks.create(
804 screen_name=screen_name[1:],
805 include_entities=False,
806 skip_status=True)
807 printNicely(green('You blocked ' + screen_name + '.'))
808 else:
809 printNicely(red('A name should begin with a \'@\''))
810
811
812 def unblock():
813 """
814 Unblock a user
815 """
816 t = Twitter(auth=authen())
817 screen_name = g['stuff'].split()[0]
818 if screen_name.startswith('@'):
819 t.blocks.destroy(
820 screen_name=screen_name[1:],
821 include_entities=False,
822 skip_status=True)
823 printNicely(green('Unblock ' + screen_name + ' success!'))
824 else:
825 printNicely(red('A name should begin with a \'@\''))
826
827
828 def report():
829 """
830 Report a user as a spam account
831 """
832 t = Twitter(auth=authen())
833 screen_name = g['stuff'].split()[0]
834 if screen_name.startswith('@'):
835 t.users.report_spam(
836 screen_name=screen_name[1:])
837 printNicely(green('You reported ' + screen_name + '.'))
838 else:
839 printNicely(red('Sorry I can\'t understand.'))
840
841
842 def get_slug():
843 """
844 Ask for a list's owner and slug
845 """
846 # Get list name
847 list_name = raw_input(light_magenta('Give me the list\'s name: '))
848 # Get list name and owner
849 try:
850 owner, slug = list_name.split('/')
851 if slug.startswith('@'):
852 slug = slug[1:]
853 return owner, slug
854 except:
855 printNicely(
856 light_magenta('List name should follow "@owner/list_name" format.'))
857 raise Exception('Wrong list name')
858
859
860 def show_lists(t):
861 """
862 Show all lists the user belongs to
863 """
864 rel = t.lists.list(screen_name=g['original_name'])
865 if rel:
866 print_list(rel)
867 else:
868 printNicely(light_magenta('You belong to no lists :)'))
869
870
871 def list_home(t):
872 """
873 List home
874 """
875 owner, slug = get_slug()
876 res = t.lists.statuses(
877 slug=slug,
878 owner_screen_name=owner,
879 count=c['LIST_MAX'],
880 include_entities=False)
881 for tweet in res:
882 draw(t=tweet)
883 printNicely('')
884
885
886 def list_members(t):
887 """
888 List members
889 """
890 owner, slug = get_slug()
891 # Get members
892 rel = {}
893 next_cursor = -1
894 while next_cursor != 0:
895 m = t.lists.members(
896 slug=slug,
897 owner_screen_name=owner,
898 cursor=next_cursor,
899 include_entities=False)
900 for u in m['users']:
901 rel[u['name']] = '@' + u['screen_name']
902 next_cursor = m['next_cursor']
903 printNicely('All: ' + str(len(rel)) + ' members.')
904 for name in rel:
905 user = ' ' + cycle_color(name)
906 user += color_func(c['TWEET']['nick'])(' ' + rel[name] + ' ')
907 printNicely(user)
908
909
910 def list_subscribers(t):
911 """
912 List subscribers
913 """
914 owner, slug = get_slug()
915 # Get subscribers
916 rel = {}
917 next_cursor = -1
918 while next_cursor != 0:
919 m = t.lists.subscribers(
920 slug=slug,
921 owner_screen_name=owner,
922 cursor=next_cursor,
923 include_entities=False)
924 for u in m['users']:
925 rel[u['name']] = '@' + u['screen_name']
926 next_cursor = m['next_cursor']
927 printNicely('All: ' + str(len(rel)) + ' subscribers.')
928 for name in rel:
929 user = ' ' + cycle_color(name)
930 user += color_func(c['TWEET']['nick'])(' ' + rel[name] + ' ')
931 printNicely(user)
932
933
934 def list_add(t):
935 """
936 Add specific user to a list
937 """
938 owner, slug = get_slug()
939 # Add
940 user_name = raw_input(light_magenta('Give me name of the newbie: '))
941 if user_name.startswith('@'):
942 user_name = user_name[1:]
943 try:
944 t.lists.members.create(
945 slug=slug,
946 owner_screen_name=owner,
947 screen_name=user_name)
948 printNicely(green('Added.'))
949 except:
950 printNicely(light_magenta('I\'m sorry we can not add him/her.'))
951
952
953 def list_remove(t):
954 """
955 Remove specific user from a list
956 """
957 owner, slug = get_slug()
958 # Remove
959 user_name = raw_input(light_magenta('Give me name of the unlucky one: '))
960 if user_name.startswith('@'):
961 user_name = user_name[1:]
962 try:
963 t.lists.members.destroy(
964 slug=slug,
965 owner_screen_name=owner,
966 screen_name=user_name)
967 printNicely(green('Gone.'))
968 except:
969 printNicely(light_magenta('I\'m sorry we can not remove him/her.'))
970
971
972 def list_subscribe(t):
973 """
974 Subscribe to a list
975 """
976 owner, slug = get_slug()
977 # Subscribe
978 try:
979 t.lists.subscribers.create(
980 slug=slug,
981 owner_screen_name=owner)
982 printNicely(green('Done.'))
983 except:
984 printNicely(
985 light_magenta('I\'m sorry you can not subscribe to this list.'))
986
987
988 def list_unsubscribe(t):
989 """
990 Unsubscribe a list
991 """
992 owner, slug = get_slug()
993 # Unsubscribe
994 try:
995 t.lists.subscribers.destroy(
996 slug=slug,
997 owner_screen_name=owner)
998 printNicely(green('Done.'))
999 except:
1000 printNicely(
1001 light_magenta('I\'m sorry you can not unsubscribe from this list.'))
1002
1003
1004 def list_own(t):
1005 """
1006 List own
1007 """
1008 rel = []
1009 next_cursor = -1
1010 while next_cursor != 0:
1011 res = t.lists.ownerships(
1012 screen_name=g['original_name'],
1013 cursor=next_cursor)
1014 rel += res['lists']
1015 next_cursor = res['next_cursor']
1016 if rel:
1017 print_list(rel)
1018 else:
1019 printNicely(light_magenta('You own no lists :)'))
1020
1021
1022 def list_new(t):
1023 """
1024 Create a new list
1025 """
1026 name = raw_input(light_magenta('New list\'s name: '))
1027 mode = raw_input(light_magenta('New list\'s mode (public/private): '))
1028 description = raw_input(light_magenta('New list\'s description: '))
1029 try:
1030 t.lists.create(
1031 name=name,
1032 mode=mode,
1033 description=description)
1034 printNicely(green(name + ' list is created.'))
1035 except:
1036 printNicely(red('Oops something is wrong with Twitter :('))
1037
1038
1039 def list_update(t):
1040 """
1041 Update a list
1042 """
1043 slug = raw_input(light_magenta('Your list that you want to update: '))
1044 name = raw_input(light_magenta('Update name (leave blank to keep unchanged): '))
1045 mode = raw_input(light_magenta('Update mode (public/private): '))
1046 description = raw_input(light_magenta('Update description: '))
1047 try:
1048 if name:
1049 t.lists.update(
1050 slug='-'.join(slug.split()),
1051 owner_screen_name=g['original_name'],
1052 name=name,
1053 mode=mode,
1054 description=description)
1055 else:
1056 t.lists.update(
1057 slug=slug,
1058 owner_screen_name=g['original_name'],
1059 mode=mode,
1060 description=description)
1061 printNicely(green(slug + ' list is updated.'))
1062 except:
1063 printNicely(red('Oops something is wrong with Twitter :('))
1064
1065
1066 def list_delete(t):
1067 """
1068 Delete a list
1069 """
1070 slug = raw_input(light_magenta('Your list that you want to delete: '))
1071 try:
1072 t.lists.destroy(
1073 slug='-'.join(slug.split()),
1074 owner_screen_name=g['original_name'])
1075 printNicely(green(slug + ' list is deleted.'))
1076 except:
1077 printNicely(red('Oops something is wrong with Twitter :('))
1078
1079
1080 def twitterlist():
1081 """
1082 Twitter's list
1083 """
1084 t = Twitter(auth=authen())
1085 # List all lists or base on action
1086 try:
1087 g['list_action'] = g['stuff'].split()[0]
1088 except:
1089 show_lists(t)
1090 return
1091 # Sub-function
1092 action_ary = {
1093 'home': list_home,
1094 'all_mem': list_members,
1095 'all_sub': list_subscribers,
1096 'add': list_add,
1097 'rm': list_remove,
1098 'sub': list_subscribe,
1099 'unsub': list_unsubscribe,
1100 'own': list_own,
1101 'new': list_new,
1102 'update': list_update,
1103 'del': list_delete,
1104 }
1105 try:
1106 return action_ary[g['list_action']](t)
1107 except:
1108 printNicely(red('Please try again.'))
1109
1110
1111 def cal():
1112 """
1113 Unix's command `cal`
1114 """
1115 # Format
1116 rel = os.popen('cal').read().split('\n')
1117 month = rel.pop(0)
1118 date = rel.pop(0)
1119 show_calendar(month, date, rel)
1120
1121
1122 def config():
1123 """
1124 Browse and change config
1125 """
1126 all_config = get_all_config()
1127 g['stuff'] = g['stuff'].strip()
1128 # List all config
1129 if not g['stuff']:
1130 for k in all_config:
1131 line = ' ' * 2 + \
1132 green(k) + ': ' + light_yellow(str(all_config[k]))
1133 printNicely(line)
1134 guide = 'Detailed explanation can be found at ' + \
1135 color_func(c['TWEET']['link'])(
1136 'http://rainbowstream.readthedocs.org/en/latest/#config-explanation')
1137 printNicely(guide)
1138 # Print specific config
1139 elif len(g['stuff'].split()) == 1:
1140 if g['stuff'] in all_config:
1141 k = g['stuff']
1142 line = ' ' * 2 + \
1143 green(k) + ': ' + light_yellow(str(all_config[k]))
1144 printNicely(line)
1145 else:
1146 printNicely(red('No such config key.'))
1147 # Print specific config's default value
1148 elif len(g['stuff'].split()) == 2 and g['stuff'].split()[-1] == 'default':
1149 key = g['stuff'].split()[0]
1150 try:
1151 value = get_default_config(key)
1152 line = ' ' * 2 + green(key) + ': ' + light_magenta(value)
1153 printNicely(line)
1154 except Exception as e:
1155 printNicely(red(e))
1156 # Delete specific config key in config file
1157 elif len(g['stuff'].split()) == 2 and g['stuff'].split()[-1] == 'drop':
1158 key = g['stuff'].split()[0]
1159 try:
1160 delete_config(key)
1161 printNicely(green('Config key is dropped.'))
1162 except Exception as e:
1163 printNicely(red(e))
1164 # Set specific config
1165 elif len(g['stuff'].split()) == 3 and g['stuff'].split()[1] == '=':
1166 key = g['stuff'].split()[0]
1167 value = g['stuff'].split()[-1]
1168 if key == 'THEME' and not validate_theme(value):
1169 printNicely(red('Invalid theme\'s value.'))
1170 return
1171 try:
1172 set_config(key, value)
1173 # Apply theme immediately
1174 if key == 'THEME':
1175 c['THEME'] = reload_theme(value, c['THEME'])
1176 g['decorated_name'] = lambda x: color_func(
1177 c['DECORATED_NAME'])('[' + x + ']: ')
1178 reload_config()
1179 printNicely(green('Updated successfully.'))
1180 except Exception as e:
1181 printNicely(red(e))
1182 else:
1183 printNicely(light_magenta('Sorry I can\'t understand.'))
1184
1185
1186 def theme():
1187 """
1188 List and change theme
1189 """
1190 if not g['stuff']:
1191 # List themes
1192 for theme in g['themes']:
1193 line = light_magenta(theme)
1194 if c['THEME'] == theme:
1195 line = ' ' * 2 + light_yellow('* ') + line
1196 else:
1197 line = ' ' * 4 + line
1198 printNicely(line)
1199 else:
1200 # Change theme
1201 try:
1202 # Load new theme
1203 c['THEME'] = reload_theme(g['stuff'], c['THEME'])
1204 # Redefine decorated_name
1205 g['decorated_name'] = lambda x: color_func(
1206 c['DECORATED_NAME'])(
1207 '[' + x + ']: ')
1208 printNicely(green('Theme changed.'))
1209 except Exception as e:
1210 print(e)
1211 printNicely(red('No such theme exists.'))
1212
1213
1214 def help_discover():
1215 """
1216 Discover the world
1217 """
1218 s = ' ' * 2
1219 # Discover the world
1220 usage = '\n'
1221 usage += s + grey(u'\u266A' + ' Discover the world \n')
1222 usage += s * 2 + light_green('trend') + ' will show global trending topics. ' + \
1223 'You can try ' + light_green('trend US') + ' or ' + \
1224 light_green('trend JP Tokyo') + '.\n'
1225 usage += s * 2 + light_green('home') + ' will show your timeline. ' + \
1226 light_green('home 7') + ' will show 7 tweets.\n'
1227 usage += s * 2 + light_green('mentions') + ' will show mentions timeline. ' + \
1228 light_green('mentions 7') + ' will show 7 mention tweets.\n'
1229 usage += s * 2 + light_green('whois @mdo') + ' will show profile of ' + \
1230 magenta('@mdo') + '.\n'
1231 usage += s * 2 + light_green('view @mdo') + \
1232 ' will show ' + magenta('@mdo') + '\'s home.\n'
1233 usage += s * 2 + light_green('s AKB48') + ' will search for "' + \
1234 light_yellow('AKB48') + '" and return the 5 newest tweets. ' + \
1235 'Search can be performed with or without a hashtag.\n'
1236 printNicely(usage)
1237
1238
1239 def help_tweets():
1240 """
1241 Tweets
1242 """
1243 s = ' ' * 2
1244 # Tweet
1245 usage = '\n'
1246 usage += s + grey(u'\u266A' + ' Tweets \n')
1247 usage += s * 2 + light_green('t oops ') + \
1248 'will tweet "' + light_yellow('oops') + '" immediately.\n'
1249 usage += s * 2 + \
1250 light_green('rt 12 ') + ' will retweet to tweet with ' + \
1251 light_yellow('[id=12]') + '.\n'
1252 usage += s * 2 + \
1253 light_green('quote 12 ') + ' will quote the tweet with ' + \
1254 light_yellow('[id=12]') + '. If no extra text is added, ' + \
1255 'the quote will be canceled.\n'
1256 usage += s * 2 + \
1257 light_green('allrt 12 20 ') + ' will list 20 newest retweet of the tweet with ' + \
1258 light_yellow('[id=12]') + '.\n'
1259 usage += s * 2 + light_green('rep 12 oops') + ' will reply "' + \
1260 light_yellow('oops') + '" to tweet with ' + \
1261 light_yellow('[id=12]') + '.\n'
1262 usage += s * 2 + \
1263 light_green('fav 12 ') + ' will favorite the tweet with ' + \
1264 light_yellow('[id=12]') + '.\n'
1265 usage += s * 2 + \
1266 light_green('ufav 12 ') + ' will unfavorite tweet with ' + \
1267 light_yellow('[id=12]') + '.\n'
1268 usage += s * 2 + \
1269 light_green('del 12 ') + ' will delete tweet with ' + \
1270 light_yellow('[id=12]') + '.\n'
1271 usage += s * 2 + light_green('show image 12') + ' will show image in tweet with ' + \
1272 light_yellow('[id=12]') + ' in your OS\'s image viewer.\n'
1273 usage += s * 2 + light_green('open 12') + ' will open url in tweet with ' + \
1274 light_yellow('[id=12]') + ' in your OS\'s default browser.\n'
1275 printNicely(usage)
1276
1277
1278 def help_messages():
1279 """
1280 Messages
1281 """
1282 s = ' ' * 2
1283 # Direct message
1284 usage = '\n'
1285 usage += s + grey(u'\u266A' + ' Direct messages \n')
1286 usage += s * 2 + light_green('inbox') + ' will show inbox messages. ' + \
1287 light_green('inbox 7') + ' will show newest 7 messages.\n'
1288 usage += s * 2 + light_green('sent') + ' will show sent messages. ' + \
1289 light_green('sent 7') + ' will show newest 7 messages.\n'
1290 usage += s * 2 + light_green('mes @dtvd88 hi') + ' will send a "hi" message to ' + \
1291 magenta('@dtvd88') + '.\n'
1292 usage += s * 2 + light_green('trash 5') + ' will remove message with ' + \
1293 light_yellow('[message_id=5]') + '.\n'
1294 printNicely(usage)
1295
1296
1297 def help_friends_and_followers():
1298 """
1299 Friends and Followers
1300 """
1301 s = ' ' * 2
1302 # Follower and following
1303 usage = '\n'
1304 usage += s + grey(u'\u266A' + ' Friends and followers \n')
1305 usage += s * 2 + \
1306 light_green('ls fl') + \
1307 ' will list all followers (people who are following you).\n'
1308 usage += s * 2 + \
1309 light_green('ls fr') + \
1310 ' will list all friends (people who you are following).\n'
1311 usage += s * 2 + light_green('fl @dtvd88') + ' will follow ' + \
1312 magenta('@dtvd88') + '.\n'
1313 usage += s * 2 + light_green('ufl @dtvd88') + ' will unfollow ' + \
1314 magenta('@dtvd88') + '.\n'
1315 usage += s * 2 + light_green('mute @dtvd88') + ' will mute ' + \
1316 magenta('@dtvd88') + '.\n'
1317 usage += s * 2 + light_green('unmute @dtvd88') + ' will unmute ' + \
1318 magenta('@dtvd88') + '.\n'
1319 usage += s * 2 + light_green('muting') + ' will list muting users.\n'
1320 usage += s * 2 + light_green('block @dtvd88') + ' will block ' + \
1321 magenta('@dtvd88') + '.\n'
1322 usage += s * 2 + light_green('unblock @dtvd88') + ' will unblock ' + \
1323 magenta('@dtvd88') + '.\n'
1324 usage += s * 2 + light_green('report @dtvd88') + ' will report ' + \
1325 magenta('@dtvd88') + ' as a spam account.\n'
1326 printNicely(usage)
1327
1328
1329 def help_list():
1330 """
1331 Lists
1332 """
1333 s = ' ' * 2
1334 # Twitter list
1335 usage = '\n'
1336 usage += s + grey(u'\u266A' + ' Twitter list\n')
1337 usage += s * 2 + light_green('list') + \
1338 ' will show all lists you belong to.\n'
1339 usage += s * 2 + light_green('list home') + \
1340 ' will show timeline of list. You will be asked for list\'s name.\n'
1341 usage += s * 2 + light_green('list all_mem') + \
1342 ' will show list\'s all members.\n'
1343 usage += s * 2 + light_green('list all_sub') + \
1344 ' will show list\'s all subscribers.\n'
1345 usage += s * 2 + light_green('list add') + \
1346 ' will add specific person to a list owned by you.' + \
1347 ' You will be asked for list\'s name and person\'s name.\n'
1348 usage += s * 2 + light_green('list rm') + \
1349 ' will remove specific person from a list owned by you.' + \
1350 ' You will be asked for list\'s name and person\'s name.\n'
1351 usage += s * 2 + light_green('list sub') + \
1352 ' will subscribe you to a specific list.\n'
1353 usage += s * 2 + light_green('list unsub') + \
1354 ' will unsubscribe you from a specific list.\n'
1355 usage += s * 2 + light_green('list own') + \
1356 ' will show all lists owned by you.\n'
1357 usage += s * 2 + light_green('list new') + \
1358 ' will create a new list.\n'
1359 usage += s * 2 + light_green('list update') + \
1360 ' will update a list owned by you.\n'
1361 usage += s * 2 + light_green('list del') + \
1362 ' will delete a list owned by you.\n'
1363 printNicely(usage)
1364
1365
1366 def help_stream():
1367 """
1368 Stream switch
1369 """
1370 s = ' ' * 2
1371 # Switch
1372 usage = '\n'
1373 usage += s + grey(u'\u266A' + ' Switching streams \n')
1374 usage += s * 2 + light_green('switch public #AKB') + \
1375 ' will switch to public stream and follow "' + \
1376 light_yellow('AKB') + '" keyword.\n'
1377 usage += s * 2 + light_green('switch mine') + \
1378 ' will switch to your personal stream.\n'
1379 usage += s * 2 + light_green('switch mine -f ') + \
1380 ' will prompt to enter the filter.\n'
1381 usage += s * 3 + light_yellow('Only nicks') + \
1382 ' filter will decide which nicks are INCLUDED ONLY.\n'
1383 usage += s * 3 + light_yellow('Ignore nicks') + \
1384 ' filter will decide which nicks are EXCLUDED.\n'
1385 usage += s * 2 + light_green('switch mine -d') + \
1386 ' will use the config\'s ONLY_LIST and IGNORE_LIST.\n'
1387 printNicely(usage)
1388
1389
1390 def help():
1391 """
1392 Help
1393 """
1394 s = ' ' * 2
1395 h, w = os.popen('stty size', 'r').read().split()
1396 # Start
1397 usage = '\n'
1398 usage += s + 'Hi boss! I\'m ready to serve you right now!\n'
1399 usage += s + '-' * (int(w) - 4) + '\n'
1400 usage += s + 'You are ' + \
1401 light_yellow('already') + ' on your personal stream.\n'
1402 usage += s + 'Any update from Twitter will show up ' + \
1403 light_yellow('immediately') + '.\n'
1404 usage += s + 'In addition, the following commands are available right now:\n'
1405 # Twitter help section
1406 usage += '\n'
1407 usage += s + grey(u'\u266A' + ' Twitter help\n')
1408 usage += s * 2 + light_green('h discover') + \
1409 ' will show help for discover commands.\n'
1410 usage += s * 2 + light_green('h tweets') + \
1411 ' will show help for tweets commands.\n'
1412 usage += s * 2 + light_green('h messages') + \
1413 ' will show help for messages commands.\n'
1414 usage += s * 2 + light_green('h friends_and_followers') + \
1415 ' will show help for friends and followers commands.\n'
1416 usage += s * 2 + light_green('h list') + \
1417 ' will show help for list commands.\n'
1418 usage += s * 2 + light_green('h stream') + \
1419 ' will show help for stream commands.\n'
1420 # Smart shell
1421 usage += '\n'
1422 usage += s + grey(u'\u266A' + ' Smart shell\n')
1423 usage += s * 2 + light_green('111111 * 9 / 7') + ' or any math expression ' + \
1424 'will be evaluated by the Python interpreter.\n'
1425 usage += s * 2 + 'Even ' + light_green('cal') + ' will show the calendar' + \
1426 ' for current month.\n'
1427 # Config
1428 usage += '\n'
1429 usage += s + grey(u'\u266A' + ' Config \n')
1430 usage += s * 2 + light_green('theme') + ' will list available themes. ' + \
1431 light_green('theme monokai') + ' will apply ' + light_yellow('monokai') + \
1432 ' theme immediately.\n'
1433 usage += s * 2 + light_green('config') + ' will list all config.\n'
1434 usage += s * 3 + \
1435 light_green('config ASCII_ART') + ' will output current value of ' +\
1436 light_yellow('ASCII_ART') + ' config key.\n'
1437 usage += s * 3 + \
1438 light_green('config TREND_MAX default') + ' will output default value of ' + \
1439 light_yellow('TREND_MAX') + ' config key.\n'
1440 usage += s * 3 + \
1441 light_green('config CUSTOM_CONFIG drop') + ' will drop ' + \
1442 light_yellow('CUSTOM_CONFIG') + ' config key.\n'
1443 usage += s * 3 + \
1444 light_green('config IMAGE_ON_TERM = true') + ' will set value of ' + \
1445 light_yellow('IMAGE_ON_TERM') + ' config key to ' + \
1446 light_yellow('True') + '.\n'
1447 # Screening
1448 usage += '\n'
1449 usage += s + grey(u'\u266A' + ' Screening \n')
1450 usage += s * 2 + light_green('h') + ' will show this help again.\n'
1451 usage += s * 2 + light_green('p') + ' will pause the stream.\n'
1452 usage += s * 2 + light_green('r') + ' will unpause the stream.\n'
1453 usage += s * 2 + light_green('c') + ' will clear the screen.\n'
1454 usage += s * 2 + light_green('q') + ' will quit.\n'
1455 # End
1456 usage += '\n'
1457 usage += s + '-' * (int(w) - 4) + '\n'
1458 usage += s + 'Have fun and hang tight! \n'
1459 # Show help
1460 d = {
1461 'discover': help_discover,
1462 'tweets': help_tweets,
1463 'messages': help_messages,
1464 'friends_and_followers': help_friends_and_followers,
1465 'list': help_list,
1466 'stream': help_stream,
1467 }
1468 if g['stuff']:
1469 d.get(
1470 g['stuff'].strip(),
1471 lambda: printNicely(red('No such command.'))
1472 )()
1473 else:
1474 printNicely(usage)
1475
1476
1477 def pause():
1478 """
1479 Pause stream display
1480 """
1481 c['pause'] = True
1482 printNicely(green('Stream is paused'))
1483
1484
1485 def replay():
1486 """
1487 Replay stream
1488 """
1489 c['pause'] = False
1490 printNicely(green('Stream is running back now'))
1491
1492
1493 def clear():
1494 """
1495 Clear screen
1496 """
1497 os.system('clear')
1498
1499
1500 def quit():
1501 """
1502 Exit all
1503 """
1504 try:
1505 save_history()
1506 printNicely(green('See you next time :)'))
1507 except:
1508 pass
1509 sys.exit()
1510
1511
1512 def reset():
1513 """
1514 Reset prefix of line
1515 """
1516 if g['reset']:
1517 if c.get('USER_JSON_ERROR'):
1518 printNicely(red('Your ~/.rainbow_config.json is messed up:'))
1519 printNicely(red('>>> ' + c['USER_JSON_ERROR']))
1520 printNicely('')
1521 printNicely(magenta('Need tips ? Type "h" and hit Enter key!'))
1522 g['reset'] = False
1523 try:
1524 printNicely(str(eval(g['cmd'])))
1525 except Exception:
1526 pass
1527
1528
1529 def process(cmd):
1530 """
1531 Process switch
1532 """
1533 return dict(zip(
1534 cmdset,
1535 [
1536 switch,
1537 trend,
1538 home,
1539 view,
1540 mentions,
1541 tweet,
1542 retweet,
1543 quote,
1544 allretweet,
1545 favorite,
1546 reply,
1547 delete,
1548 unfavorite,
1549 search,
1550 message,
1551 show,
1552 urlopen,
1553 ls,
1554 inbox,
1555 sent,
1556 trash,
1557 whois,
1558 follow,
1559 unfollow,
1560 mute,
1561 unmute,
1562 muting,
1563 block,
1564 unblock,
1565 report,
1566 twitterlist,
1567 cal,
1568 config,
1569 theme,
1570 help,
1571 pause,
1572 replay,
1573 clear,
1574 quit
1575 ]
1576 )).get(cmd, reset)
1577
1578
1579 def listen():
1580 """
1581 Listen to user's input
1582 """
1583 d = dict(zip(
1584 cmdset,
1585 [
1586 ['public', 'mine'], # switch
1587 [], # trend
1588 [], # home
1589 ['@'], # view
1590 [], # mentions
1591 [], # tweet
1592 [], # retweet
1593 [], # quote
1594 [], # allretweet
1595 [], # favorite
1596 [], # reply
1597 [], # delete
1598 [], # unfavorite
1599 ['#'], # search
1600 ['@'], # message
1601 ['image'], # show image
1602 [''], # open url
1603 ['fl', 'fr'], # list
1604 [], # inbox
1605 [], # sent
1606 [], # trash
1607 ['@'], # whois
1608 ['@'], # follow
1609 ['@'], # unfollow
1610 ['@'], # mute
1611 ['@'], # unmute
1612 ['@'], # muting
1613 ['@'], # block
1614 ['@'], # unblock
1615 ['@'], # report
1616 [
1617 'home',
1618 'all_mem',
1619 'all_sub',
1620 'add',
1621 'rm',
1622 'sub',
1623 'unsub',
1624 'own',
1625 'new',
1626 'update',
1627 'del'
1628 ], # list
1629 [], # cal
1630 [key for key in dict(get_all_config())], # config
1631 g['themes'], # theme
1632 [
1633 'discover',
1634 'tweets',
1635 'messages',
1636 'friends_and_followers',
1637 'list',
1638 'stream'
1639 ], # help
1640 [], # pause
1641 [], # replay
1642 [], # clear
1643 [], # quit
1644 ]
1645 ))
1646 init_interactive_shell(d)
1647 read_history()
1648 reset()
1649 while True:
1650 # raw_input
1651 if g['prefix']:
1652 line = raw_input(g['decorated_name'](c['PREFIX']))
1653 else:
1654 line = raw_input()
1655 # Save previous cmd in order to compare with readline buffer
1656 g['previous_cmd'] = line.strip()
1657 try:
1658 cmd = line.split()[0]
1659 except:
1660 cmd = ''
1661 g['cmd'] = cmd
1662 try:
1663 # Lock the semaphore
1664 c['lock'] = True
1665 # Save cmd to global variable and call process
1666 g['stuff'] = ' '.join(line.split()[1:])
1667 # Process the command
1668 process(cmd)()
1669 # Not re-display
1670 if cmd in ['switch', 't', 'rt', 'rep']:
1671 g['prefix'] = False
1672 else:
1673 g['prefix'] = True
1674 # Release the semaphore lock
1675 c['lock'] = False
1676 except Exception:
1677 printNicely(red('OMG something is wrong with Twitter right now.'))
1678
1679
1680 def stream(domain, args, name='Rainbow Stream'):
1681 """
1682 Track the stream
1683 """
1684 # The Logo
1685 art_dict = {
1686 c['USER_DOMAIN']: name,
1687 c['PUBLIC_DOMAIN']: args.track_keywords,
1688 c['SITE_DOMAIN']: name,
1689 }
1690 if c['ASCII_ART']:
1691 ascii_art(art_dict[domain])
1692 # These arguments are optional:
1693 stream_args = dict(
1694 timeout=0.5, # To check g['stream_stop'] after each 0.5 s
1695 block=not args.no_block,
1696 heartbeat_timeout=args.heartbeat_timeout)
1697 # Track keyword
1698 query_args = dict()
1699 if args.track_keywords:
1700 query_args['track'] = args.track_keywords
1701 # Get stream
1702 stream = TwitterStream(
1703 auth=authen(),
1704 domain=domain,
1705 **stream_args)
1706 try:
1707 if domain == c['USER_DOMAIN']:
1708 tweet_iter = stream.user(**query_args)
1709 elif domain == c['SITE_DOMAIN']:
1710 tweet_iter = stream.site(**query_args)
1711 else:
1712 if args.track_keywords:
1713 tweet_iter = stream.statuses.filter(**query_args)
1714 else:
1715 tweet_iter = stream.statuses.sample()
1716 # Block new stream until other one exits
1717 StreamLock.acquire()
1718 g['stream_stop'] = False
1719 for tweet in tweet_iter:
1720 if tweet is None:
1721 printNicely("-- None --")
1722 elif tweet is Timeout:
1723 if(g['stream_stop']):
1724 StreamLock.release()
1725 break
1726 elif tweet is HeartbeatTimeout:
1727 printNicely("-- Heartbeat Timeout --")
1728 elif tweet is Hangup:
1729 printNicely("-- Hangup --")
1730 elif tweet.get('text'):
1731 draw(
1732 t=tweet,
1733 keyword=args.track_keywords,
1734 check_semaphore=True,
1735 fil=args.filter,
1736 ig=args.ignore,
1737 )
1738 # Current readline buffer
1739 current_buffer = readline.get_line_buffer().strip()
1740 # There is an unexpected behaviour in MacOSX readline + Python 2:
1741 # after completely delete a word after typing it,
1742 # somehow readline buffer still contains
1743 # the 1st character of that word
1744 if current_buffer and g['previous_cmd'] != current_buffer:
1745 sys.stdout.write(
1746 g['decorated_name'](c['PREFIX']) + unc(current_buffer))
1747 sys.stdout.flush()
1748 elif not c['HIDE_PROMPT']:
1749 sys.stdout.write(g['decorated_name'](c['PREFIX']))
1750 sys.stdout.flush()
1751 elif tweet.get('direct_message'):
1752 print_message(tweet['direct_message'], check_semaphore=True)
1753 except TwitterHTTPError:
1754 printNicely('')
1755 printNicely(
1756 magenta("We have a maximum connection problem with Twitter's stream API right now :("))
1757
1758
1759 def fly():
1760 """
1761 Main function
1762 """
1763 # Initial
1764 args = parse_arguments()
1765 try:
1766 init(args)
1767 except TwitterHTTPError:
1768 printNicely('')
1769 printNicely(
1770 magenta("We have connection problem with twitter'stream API right now :("))
1771 printNicely(magenta("Let's try again later."))
1772 save_history()
1773 sys.exit()
1774 # Spawn stream thread
1775 th = threading.Thread(
1776 target=stream,
1777 args=(
1778 c['USER_DOMAIN'],
1779 args,
1780 g['original_name']))
1781 th.daemon = True
1782 th.start()
1783 # Start listen process
1784 time.sleep(0.5)
1785 g['reset'] = True
1786 g['prefix'] = True
1787 listen()
binder2nd Class
A template class providing a constructor that converts a binary function object into a unary function object by binding the second argument of the binary function to a specified value.
template<class Operation>
class binder2nd
: public unary_function <
typename Operation::first_argument_type,
typename Operation::result_type>
{
public:
typedef typename Operation::argument_type argument_type;
typedef typename Operation::result_type result_type;
binder2nd(
const Operation& _Func,
const typename Operation::second_argument_type& _Right
);
result_type operator()(
const argument_type& _Left
) const;
result_type operator()(
argument_type& _Left
) const;
protected:
Operation op;
typename Operation::second_argument_type value;
};
Parameters
_Func
The binary function object to be converted to a unary function object.
_Right
The value to which the second argument of the binary function object is to be bound.
_Left
The value of the argument that the adapted binary object compares to the fixed value of the second argument.
Return Value
The unary function object that results from binding the second argument of the binary function object to the value _Right.
Remarks
The template class stores a copy of the binary function object _Func in op, and a copy of _Right in value. It defines its member function operator() as returning op(_Left, value).
If _Func is an object of type Operation and c is a constant, then bind2nd(_Func, c) is equivalent to the binder2nd class constructor binder2nd<Operation>(_Func, c) and more convenient.
// functional_binder2nd.cpp
// compile with: /EHsc
#include <vector>
#include <functional>
#include <algorithm>
#include <iostream>
using namespace std;
int main()
{
vector<int> v1;
vector<int>::iterator Iter;
int i;
for (i = 0; i <= 5; i++)
{
v1.push_back(5 * i);
}
cout << "The vector v1 = ( ";
for (Iter = v1.begin(); Iter != v1.end(); Iter++)
cout << *Iter << " ";
cout << ")" << endl;
// Count the number of integers > 10 in the vector
vector<int>::iterator::difference_type result1;
result1 = count_if(v1.begin(), v1.end(),
binder2nd<greater<int> >(greater<int>(), 10));
cout << "The number of elements in v1 greater than 10 is: "
<< result1 << "." << endl;
// Compare using binder1st fixing 1st argument:
// count the number of integers < 10 in the vector
vector<int>::iterator::difference_type result2;
result2 = count_if(v1.begin(), v1.end(),
binder1st<greater<int> >(greater<int>(), 10));
cout << "The number of elements in v1 less than 10 is: "
<< result2 << "." << endl;
}
The vector v1 = ( 0 5 10 15 20 25 )
The number of elements in v1 greater than 10 is: 3.
The number of elements in v1 less than 10 is: 2.
Requirements
Header: <functional>
Namespace: std
© 2014 Microsoft. All rights reserved.
How to Back Up Your iPhone or iPad on Windows
Keep your iOS data stored on your Windows PC just in case.
Backing up your iPhone is a reliable way to secure your data if your device gets misplaced, stolen, or otherwise damaged. Thankfully, you can easily back up your iPhone on your Windows PC. This article will explain how to easily back up your iOS devices to your Windows computer using iTunes.
Get iTunes on Your PC
To start backing up your iOS device on your Windows computer, get the iTunes app from Apple’s official website, or grab iTunes for Windows from Apple’s Support page if you can’t access the Microsoft Store.
1. Open iTunes on your PC once it is installed.
2. Sign in with your Apple ID and password by going to “Account -> Sign In.”
1. Connect your Apple device to your Windows computer using a USB or USB-C charging cable.
2. Click “Continue” in the pop-up window that requests access to your iOS device.
1. Click “Trust” in the “Trust This Computer?” dialog that pops up on your iOS device to connect your device and computer for syncing purposes.
Once your iOS device is connected to the computer, you will see a tiny smartphone icon underneath “Account” on your PC.
Start the Backup Process
1. Click on the smartphone icon at the top.
2. Select the “Summary” option.
3. On the right, you should spot “Backups,” where all your options to back up your devices are listed.
4. If you opt to back up automatically, you will have two choices: the first is the iCloud option, where your smartphone will be automatically backed up over Wi-Fi (provided it’s charging). Alternatively, if you choose the “This computer” option, you can back up your iOS device only when it is connected to your computer.
5. Click “Back Up Now” to back up your files manually. You don’t need Wi-Fi for this but must leave your device connected to your PC for as long as it backs up.
Note: manually backing up your devices will not interfere with any automatic backup settings you have set up.
6. Choose to encrypt your device’s backup if you’re storing sensitive info such as passwords, and check the “Encrypt Local Backup” box before starting the process. A password is required to encrypt your local backup. Store it safely, as you cannot recover it if you forget it.
7. A bar at the top of the iTunes window will show the backup progress. Do not disconnect your iOS device or turn off your computer until the backup is complete.
8. Safely eject your iOS device from your Windows computer once the backup is complete to ensure that all your files, personal data and information are secured. To confirm that your device has been backed up, check the date and time for the completed backup under “Latest Backup.”
Frequently Asked Questions
Can I back up my iOS device without iTunes?
Yes, you can back up your iOS device through iCloud by setting up iCloud to back up your files automatically at a certain interval. The downside to iCloud backups is that you need to have enough iCloud storage space; if you run out of space, your devices stop backing up.
You can also use third-party software to back up your iOS devices. These provide settings that help you manage and control how much data you decide to back up. Most third-party backup software allows partial backups and the option to back up directly to a computer’s hard drive.
How do I purchase additional storage space on iCloud?
To purchase additional storage, go to “Settings” on your phone, tap your name, then click “iCloud.” Here you should see exactly how much storage you’ve used and how much of each data type is contained in your iCloud storage. Click “Manage Storage,” then “Upgrade” to upgrade to iCloud+.
If you’re skeptical about the subscription prices, remember that iCloud+ comes with premium features aside from more storage space. These include “Custom Email Domain,” “Private Replay (Beta),” and “HomeKit Secure Video Recording,” among others. If you don’t want to upgrade, back up your data on your PC.
Can I back up my iPhone and iPad to the same Windows computer?
Yes. You can back up different iOS devices to the same Windows computer, as iTunes creates different backups for each device. View all the backups on your computer by going to iTunes, clicking on “Edit,” “Preferences,” then “Devices.” All encrypted backups have a padlock icon in front of the date of their last backup.
Image credit: Walling via Unsplash. All screenshots by Tayo Sogbesan.
type choice = enum { One, Two };
type entier = int;

const PI = 3;
const choix = One;

node toto (x:int) returns (a:int; y:int)
var
  c : choice clock;
  b1, b2 : entier;
let
  a = PI fby (a+1);
  c = if a = x then One else Two;
  b1 = 1 when One(c);
  b2 = 2 when Two(c);
  y = merge c (One -> b1) (Two -> b2);
tel

node test (x:bool) returns (y:int)
var
  a : int;
  b : int;
let
  (a,b) = toto(if x then 0 else 1);
  y = a;
tel
Creating a MERN stack app that uses Firebase Authentication - Part Two
1/25/2022 · 8 minute read · 0 comments · 2199 views
Originally posted on dev.to.
My favorite stack to use is the MERN stack. For those of you who aren’t sure what the acronym stands for, it’s MongoDB, Express, React, and Node. These are frameworks and libraries that offer a powerful way to bootstrap a new application. Paired with Firebase, it’s relatively simple to deliver a safe authentication system that you can use on both the back end and the front end of your application.
This article series will cover the following things:
• Creating an Express server with a MongoDB database connected and using Firebase Admin SDK. Check out Part One.
• Setting up a client side React App that uses Firebase for authentication.
• If you just want to take a look at the code and can divine more from that, check out the public repo I created.
React Front End
One important note, for the Front End I used Vite to bootstrap the application, but you could easily use Create React App as well.
client/src/main.jsx
import React from "react";
import ReactDOM from "react-dom";
import "./index.css";
import App from "./App";
import { StoreProvider } from "easy-peasy";
import store from "./stores/store";
ReactDOM.render(
<React.StrictMode>
<StoreProvider store={store}>
<App />
</StoreProvider>
</React.StrictMode>,
document.getElementById("root")
);
This is the main entry point into our application. Everything here is pretty standard for React, but one important thing to note is that we’re using a library called Easy Peasy. It’s essentially a state management library that is very simple to set up, being a wrapper around Redux.
client/src/stores/store.js
import { createStore, action } from "easy-peasy";
const store = createStore({
authorized: false,
setAuthorized: action((state, payload) => {
state.authorized = true;
}),
setUnauthorized: action((state, payload) => {
state.authorized = false;
})
});
export default store;
This is our setup for Easy Peasy. We’re tracking just a single state variable, but you could easily add more things to the store here. Once we’re logged into Firebase or the auth state changes, we’ll use these actions to update the boolean that tracks whether the user is authorized. If Easy Peasy isn’t your speed, you could easily replace it with Redux, Recoil, MobX, the Context API, or any other state management solution.
client/src/services/firebase.js
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
const firebaseConfig = {
apiKey: "",
authDomain: "",
projectId: "",
storageBucket: "",
messagingSenderId: "",
appId: ""
};
initializeApp(firebaseConfig);
const auth = getAuth();
export default {
auth
};
Much like our Back End, we have to setup our Firebase service. The firebaseConfig is something you will get when you create a new project and add a web app to your project. I left it blank for a good reason as I didn’t want to share the information on my Firebase project. That being said, all you need to do is copy and paste your information from Firebase and you should be good to go.
client/src/App.jsx
import "./App.css";
import UnauthorizedRoutes from "./routes/UnauthorizedRoutes";
import AuthorizedRoutes from "./routes/AuthorizedRoutes";
import { useStoreState, useStoreActions } from "easy-peasy";
import firebaseService from "./services/firebase";
import { useEffect, useState } from "react";
function App() {
const [loading, setLoading] = useState(true);
const authorized = useStoreState((state) => state.authorized);
const setAuthorized = useStoreActions((actions) => actions.setAuthorized);
const setUnauthorized = useStoreActions((actions) => actions.setUnauthorized);
const authStateListener = () => {
firebaseService.auth.onAuthStateChanged(async (user) => {
if (!user) {
setLoading(false);
return setUnauthorized();
}
setLoading(false);
return setAuthorized();
});
};
useEffect(() => {
authStateListener();
}, [authStateListener]);
return (
<div className="App" style={{ padding: 16 }}>
{loading ? (
<p>Loading...</p>
) : authorized ? (
<AuthorizedRoutes />
) : (
<UnauthorizedRoutes />
)}
</div>
);
}
export default App;
In our App.jsx we tackle a few different things. First off, we make sure we show a loading indication when the app first renders, because we’re essentially showing certain routes depending on whether or not we’re authenticated. The authStateListener function monitors the Firebase authentication state through a useEffect. If there’s a user, it sets the global state to true through Easy Peasy; otherwise it’s false.
client/src/routes/AuthorizedRoutes.jsx
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";
import AuthorizedNav from "../components/navigation/AuthorizedNav";
import DashboardPage from "../components/pages/Dashboard";
export default function UnauthorizedRoutes() {
return (
<Router>
<AuthorizedNav />
<Routes>
<Route path="/" element={<DashboardPage />} />
<Route
path="*"
element={
<main>
<p>Not found.</p>
</main>
}
/>
</Routes>
</Router>
);
}
If we are Authorized through Firebase Authentication, we have access to these routes. Right now it’s a single route with a dashboard page being rendered. One could easily add more routes that can only be seen while logged in, such as a Settings page, or anything that is relevant to the type of app it’s supposed to be.
client/src/routes/UnauthorizedRoutes.jsx
import { BrowserRouter as Router, Routes, Route } from "react-router-dom";
import UnauthorizedNav from "../components/navigation/UnauthorizedNav";
import HomePage from "../components/pages/Home";
import SignInPage from "../components/pages/SignIn";
import SignUpPage from "../components/pages/SignUp";
export default function UnauthorizedRoutes() {
return (
<Router>
<UnauthorizedNav />
<Routes>
<Route path="/" element={<HomePage />} />
<Route path="/signup" element={<SignUpPage />} />
<Route path="/signin" element={<SignInPage />} />
<Route
path="*"
element={
<main>
<p>Not found.</p>
</main>
}
/>
</Routes>
</Router>
);
}
If we are logged out, we can only Sign Up, Sign In, or see our Homepage. Just like with our authorized routes, you could easily add more routes: things like a forgot-password route, an about page, a contact page, and so on.
client/src/components/navigation/AuthorizedNav.jsx
import { Link } from "react-router-dom";
import firebaseService from "../../services/firebase";
export default function AuthorizedNav() {
const logUserOut = async () => {
await firebaseService.auth.signOut();
};
return (
<nav>
<ul style={{ listStyleType: "none", display: "flex" }}>
<li style={{ marginRight: ".5rem" }}>
<Link to="/">Dashboard</Link>
</li>
<li>
<button
style={{
textDecoration: "underline",
border: "none",
backgroundColor: "inherit",
fontSize: "1rem",
padding: 0
}}
onClick={logUserOut}
>
Sign Out
</button>
</li>
</ul>
</nav>
);
}
Our navigation reflects the routes we have while we are authenticated. However, our sign out performs an action through Firebase. This will trickle back up to our App.jsx and kick us out of any authorized routes.
client/src/components/navigation/UnauthorizedNav.jsx
import { Link } from "react-router-dom";
export default function UnauthorizedNav() {
return (
<nav>
<ul style={{ listStyleType: "none", display: "flex" }}>
<li style={{ marginRight: ".5rem" }}>
<Link to="/">Home</Link>
</li>
<li style={{ marginRight: ".5rem" }}>
<Link to="/signup">Sign Up</Link>
</li>
<li>
<Link to="/signin">Sign In</Link>
</li>
</ul>
</nav>
);
}
This is our navigation for the unauthorized routes. We can only visit the Sign Up, Sign In, or Home page.
client/src/components/pages/Home.jsx
export default function HomePage() {
return <h1>Home</h1>;
}
Right now our Home page is a simple header, just to provide an example.
client/src/components/pages/SignIn.jsx
import { useStoreActions } from "easy-peasy";
import { signInWithEmailAndPassword } from "firebase/auth";
import { useState } from "react";
import { useLocation, useNavigate } from "react-router-dom";
import firebaseService from "../../services/firebase";
export default function SignInPage() {
const location = useLocation();
const navigate = useNavigate();
const [fields, setFields] = useState({
email: "",
password: ""
});
const [error, setError] = useState("");
const setAuthorized = useStoreActions((actions) => actions.setAuthorized);
const handleChange = (e) => {
setFields({ ...fields, [e.target.name]: e.target.value });
};
const handleSubmit = async (e) => {
e.preventDefault();
try {
const user = await signInWithEmailAndPassword(
firebaseService.auth,
fields.email,
fields.password
);
if (user) {
setAuthorized();
navigate("/");
console.log("Called");
}
} catch (err) {
console.log(err);
setError("Invalid email address or password.");
}
};
return (
<main>
{location.state && location.state.message ? (
<p style={{ color: "green" }}>{location.state.message}</p>
) : null}
<h1>Sign In</h1>
<form onSubmit={handleSubmit}>
<div>
<label htmlFor="email">Email Address</label>
</div>
<div>
<input
type="email"
name="email"
value={fields.email}
onChange={handleChange}
required
/>
</div>
<div style={{ marginTop: "1rem" }}>
<label htmlFor="password">Password</label>
</div>
<div>
<input
type="password"
name="password"
value={fields.password}
onChange={handleChange}
required
/>
</div>
{error ? <p style={{ color: "red" }}>Error: {error}</p> : null}
<div style={{ marginTop: "1rem" }}>
<button type="submit">Sign In</button>
</div>
</form>
</main>
);
}
On the Sign In page we have a very simple form where we collect the user’s email and password. When they click the button to sign in, it fires a Firebase auth function that returns the user and updates our authorized state. The function then navigates us away from Sign In to the / route, which takes us to our Dashboard page.
client/src/components/pages/SignUp.jsx
import { useState } from "react";
import { useNavigate } from "react-router-dom";
import axios from "axios";
export default function SignUpPage() {
const [fields, setFields] = useState({
email: "",
name: "",
password: "",
confirmPassword: ""
});
const [error, setError] = useState("");
const navigate = useNavigate();
const handleChange = (e) => {
setFields({ ...fields, [e.target.name]: e.target.value });
};
const handleSubmit = async (e) => {
e.preventDefault();
if (fields.password.length < 6) {
return setError("Password must be at least 6 characters in length.");
}
if (fields.confirmPassword !== fields.password) {
return setError("Password and confirm password must match.");
}
try {
const req = await axios.post("http://localhost:4444/api/user", {
email: fields.email,
password: fields.password,
name: fields.name
});
const message = req.data.success;
return navigate("/signin", {
replace: true,
state: {
message
}
});
} catch (err) {
const errMessage = err.response.data.error;
return setError(errMessage);
}
};
return (
<div>
<h1>Sign Up</h1>
<form onSubmit={handleSubmit}>
<div>
<label htmlFor="email">Email Address</label>
</div>
<div>
<input
type="email"
name="email"
value={fields.email}
onChange={handleChange}
required
/>
</div>
<div style={{ marginTop: "1rem" }}>
<label htmlFor="name">Name</label>
</div>
<div>
<input
type="text"
name="name"
value={fields.name}
onChange={handleChange}
required
/>
</div>
<div style={{ marginTop: "1rem" }}>
<label htmlFor="password">Password</label>
</div>
<div>
<input
type="password"
name="password"
value={fields.password}
onChange={handleChange}
required
/>
</div>
<div style={{ marginTop: "1rem" }}>
<label htmlFor="confirmPassword">Confirm Password</label>
</div>
<div>
<input
type="password"
name="confirmPassword"
value={fields.confirmPassword}
onChange={handleChange}
required
/>
</div>
{error ? <p style={{ color: "red" }}>Error: {error}</p> : null}
<div style={{ marginTop: "1rem" }}>
<button type="submit">Sign Up</button>
</div>
</form>
</div>
);
}
Our Sign Up page also contains a form that gathers information from the user: their email, name, password, and a password confirmation. After clicking Sign Up, we use axios to make a POST request to our API endpoint to add the new user. If there are any errors, we handle those as well and display them on the screen for the user.
client/src/components/pages/Dashboard.jsx
import { useEffect, useState } from "react";
import firebaseService from "../../services/firebase";
import axios from "axios";
export default function DashboardPage() {
const [loadingUser, setLoadingUser] = useState(true);
const [user, setUser] = useState(null);
const getUser = async () => {
try {
const token = await firebaseService.auth.currentUser.getIdToken(true);
console.log(token);
const req = await axios.get("http://localhost:4444/api/user", {
headers: {
authorization: `Bearer ${token}`
}
});
console.log(req.data);
if (req.data) {
setUser(req.data);
setLoadingUser(false);
}
} catch (err) {
console.error(err);
}
};
useEffect(() => {
getUser();
}, []);
return (
<>
<h1>Dashboard</h1>
{loadingUser ? (
<p>Loading User</p>
) : (
<div>
<p>Name: {user.name}</p>
<p>FirebaseID: {user.firebaseId}</p>
<p>Email: {user.email}</p>
</div>
)}
</>
);
}
The final page we look at is our Dashboard, which if you remember can only be accessed while you are authorized and authenticated by Firebase. On this page we make a request to our api to get the user data and conditionally display it on the screen.
As you can see through these code examples, it’s not very difficult to integrate Firebase auth in a MERN stack application. We can use it on our back end to protect our API routes and on our front end to protect which pages and components we render to the user, passing our token along each time we make HTTP requests. While it was out of scope for this guide, you could even integrate OAuth providers through Firebase, adding even more power to the arsenal. I hope these examples are useful to anyone trying to integrate Firebase in their MERN stack application.
Getting Started
Welcome to Vue Test Utils, the official testing utility library for Vue.js!
This is the documentation for Vue Test Utils v2, which targets Vue 3.
In short:
• Vue Test Utils 2 targets Vue 3.
• Vue Test Utils 1 targets Vue 2.
What is Vue Test Utils?
Vue Test Utils (VTU) is a set of utility functions aimed to simplify testing Vue.js components. It provides some methods to mount and interact with Vue components in an isolated manner.
Let's see an example:
js
import { mount } from '@vue/test-utils'
// The component to test
const MessageComponent = {
template: '<p>{{ msg }}</p>',
props: ['msg']
}
test('displays message', () => {
const wrapper = mount(MessageComponent, {
props: {
msg: 'Hello world'
}
})
// Assert the rendered text of the component
expect(wrapper.text()).toContain('Hello world')
})
Vue Test Utils is commonly used with a test runner. Popular test runners include:
• Vitest. Terminal based, has experimental browser UI.
• Cypress. Browser based, supports Vite, webpack.
• Playwright (experimental). Browser based, supports Vite.
• WebdriverIO. Browser based, supports Vite, Webpack, cross browser support.
Vue Test Utils is a minimal and unopinionated library. For something more featureful, ergonomic and opinionated you may want to consider Cypress Component Testing which has a hot reload development environment, or Testing Library which emphasizes accessibility based selectors when making assertions. Both of these tools use Vue Test Utils under the hood and expose the same API.
What Next?
To see Vue Test Utils in action, take the Crash Course, where we build a simple Todo app using a test-first approach.
Docs are split into two main sections:
• Essentials, to cover common use cases you'll face when testing Vue components.
• Vue Test Utils in Depth, to explore other advanced features of the library.
You can also explore the full API.
Alternatively, if you prefer to learn via video, there is a number of lectures available here.
Released under the MIT License.
Pen is working as mouse instead
Discussion in 'Software' started by Boots, May 6, 2012.
Thread Status:
Not open for further replies.
1. Boots
HP 2730p. I don't think I did anything differently, didn't download anything that I know of, but suddenly the pen is basically a mouse. It's got the mouse cursor on hover instead of the small circle, and in things like OneNote and Sticky Notes it is the bar that looks like an I instead of handwriting.
I tried reinstalling the Wacom driver, and no luck. Any help is appreciated.
2. Frank
Same thing happens to me sometimes, too, but I don't know the issue. However, if I remove the Wacom driver, restart the computer, then reinstall the Wacom driver, restart the computer, everything works again, until something strange happens (maybe twice a year) which disables the pen again.
If you find the culprit, make sure you tell me :)
3. cleft
It's likely that one of the updates from Windows Update was the culprit. I used to face this problem until I found out that an update for the touch driver had caused the whole mess. I hid the update and haven't installed any touch driver from Windows Update since then; the problem's gone.
4. Boots
It's back to normal now, so that's really weird. I didn't do anything to fix it. Frank, how do you delete drivers? They don't show up in the normal Uninstall Programs window. Cleft, was it the Wacom minidriver? I hid that one, and I haven't spotted anything similar unless it's one of the ones that downloaded automatically before I could tell it not to do that. I think you're probably right, WU has been trouble a couple of times already, though there's no touch software in the recent downloads. Maybe it was the Intel update?
5. Frank
You said you installed the Wacom drivers, so you should find them in the Control Panel under add/remove software. Something like pen driver, Wacom ..., touch, or something else (I can't tell you the exact name because I use a German Windows version). If not, then you haven't installed the Wacom drivers.
Because I had issues with Photoshop lately, I installed the latest Wacom tablet PC drivers; they work flawlessly on the T2010 with Win 7 x64.
6. cleft
The touch driver might have been installed. Also, I'm not sure about the Wacom minidriver; I think it's safe to leave it alone.
I'm a Python beginner. I need to see if a date has more than X days. How can I do this in Python?
I have tested something like:
if datetime.date(2010, 1, 12) > datetime.timedelta(3):
I got the error:
TypeError: can't compare datetime.date to datetime.timedelta
Any clue on how to achieve this?
Best Regards,
"I need to see if a date have more than X days." What does that mean? That the day of month is greater than X? To get the day of month for a date, use the day attribute. if thedate.day > X: – codeape Oct 24 '11 at 8:45
2 Answers
You can't compare a datetime to a timedelta. A timedelta represents a duration, a datetime represents a specific point in time. The difference of two datetimes is a timedelta. Datetimes are comparable with each other, as are timedeltas.
You have 2 options:
• Subtract another datetime from the one you've given, and compare the resulting timedelta with the timedelta you've also given.
• Convert the timedelta to a datetime by adding or subtracting it to another datetime, and then compare the resulting datetime with the datetime you've given.
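To make the two options concrete, here is a minimal sketch using the question's date; the "today" value is fixed so the example is reproducible (in real code you would use datetime.date.today()):

```python
import datetime

X = 3  # "more than X days old"
the_date = datetime.date(2010, 1, 12)
today = datetime.date(2010, 1, 20)  # fixed "now" for reproducibility

# Option 1: subtract two dates and compare the resulting timedelta
age = today - the_date
print(age > datetime.timedelta(days=X))  # True: more than 3 days old

# Option 2: shift a date by the timedelta and compare the resulting dates
cutoff = today - datetime.timedelta(days=X)
print(the_date < cutoff)  # True: the date falls before the cutoff
```

Both comparisons are equivalent; the only rule is that you compare dates with dates and durations with durations.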
Comparing apples and oranges is always very hard! You are trying to compare "January 12, 2010" (a fixed point in time) with "3 days" (a duration). There is no sense in this.
If what you are asking is "does my datetime fall after the nth day of the month" then you can do :
my_important_date = datetime.datetime.now()
if my_important_date.day > n:
    pass  # do your important things
MathHomeworkAnswers.org is a free math help site for student, teachers and math enthusiasts. Ask and answer math questions in algebra I, algebra II, geometry, trigonometry, calculus, statistics, word problems and more. Register for free and earn points for questions, answers and posts. Math help is always 100% free.
Note: This site is intended to help students and non-students understand and practice math. We do not condone cheating. Users attempting to abuse the system will be permanently blocked from further use.
Find f(2), f(3), f(4), and f(5) if f is defined recursively by f(0) = 2, f(1) = 1, and f(n+1) = 2f(n) - f(n-1) for n = 1, 2, ...
asked Aug 13, 2011 in Calculus Answers by anonymous
1 Answer
f(2) = 0, f(3) = -1, f(4) = -2, f(5) = -3; the sequence runs 2, 1, 0, -1, -2, -3, -4, ...
answered Aug 23, 2011 by wbechem Level 5 User (12,920 points)
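For anyone who wants to check the arithmetic, the recursion is easy to run in a few lines of Python (illustrative, not from the original answer):

```python
# f(0) = 2, f(1) = 1, and f(n+1) = 2*f(n) - f(n-1) for n >= 1
f = {0: 2, 1: 1}
for n in range(1, 5):
    f[n + 1] = 2 * f[n] - f[n - 1]

print([f[n] for n in range(6)])  # → [2, 1, 0, -1, -2, -3]
```

Since f(n+1) - f(n) = f(n) - f(n-1), the difference between consecutive terms is constant (f(1) - f(0) = -1), so the sequence simply decreases by 1 at every step.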
Quite OK Audio (QOA)... anyone ?
Just discovered this new "fast, lossy audio compression" format that claims:
Quote
QOA is fast. It decodes audio 3x faster than Ogg-Vorbis, while offering better quality and compression (278 kbits/s for 44khz stereo) than ADPCM.
QOA is simple. The reference en-/decoder fits in about 400 lines of C. The file format specification is… not yet released.
They provide online samples to evaluate it: https://qoaformat.org/samples/
Official blog: https://phoboslab.org/
Official website: https://qoaformat.org/
Official GIT: https://github.com/phoboslab/qoa
F.O.R.A.R.T. npo
Re: Quite OK Audio (QOA)... anyone ?
Reply #1
I'd wonder how this compares to, say, AptX. or the elephant in the room, mp3?
(And FWIW, given that an ancient chip like, say, dual core ARMv4 at 100MHz can decode ogg vorbis at multiple times realtime, I'm not entirely sure if there's a use case for this if its only benefit is "It's fast" )
Re: Quite OK Audio (QOA)... anyone ?
Reply #2
If anyone wishes to give this a try: https://www.rarewares.org/files/QOA.zip
This contains qoaconv.exe, the encoder, and qoaplay.exe, the player. These are Windows x64 compiles, and the encoder only accepts .wav files as input.
Command line to encode: qoaconv in.wav out.qoa
and, to play: qoaplay file.qoa
Tested on a couple of tracks and I have to say I have heard a lot worse!! ;)
Re: Quite OK Audio (QOA)... anyone ?
Reply #4
I'd be interested to see someone better at this and more patient than me ABX this.
If this is doing what I assume it is: when it isn't transparent, it should be less annoying than something like mp3 getting it wrong. But how often do modern lossy codecs get it annoyingly wrong in the general vicinity of 256 kbps?
Re: Quite OK Audio (QOA)... anyone ?
Reply #6
I'd wonder how this compares to, say, AptX. or the elephant in the room, mp3?
(And FWIW, given that an ancient chip like, say, dual core ARMv4 at 100MHz can decode ogg vorbis at multiple times realtime, I'm not entirely sure if there's a use case for this if its only benefit is "It's fast" )
Intended use cases seem to include audio in games (music, sound effects) where ADPCM formats have been used, and other applications where the computation savings would count, I guess.
Doesn't seem to be meant to compete with more traditional lossy codecs for applications where only one or just a few concurrent streams are meant to be used.
https://phoboslab.org/log/2023/02/qoa-time-domain-audio-compression
Re: Quite OK Audio (QOA)... anyone ?
Reply #7
The same guy invented QOI, a simple lossless image codec. In that case he was competitive with PNG on size and much quicker, though a lot of that is thanks to PNG being archaic. QOA is likely uncompetitive with complex audio codecs, but has a fighting chance of being competitive with quick codecs. It'll be interesting to see how they fare creating a lossy codec.
From the source:
Code:
/* The Least Mean Squares Filter is the heart of QOA. It predicts the next
sample based on the previous 4 reconstructed samples. It does so by continuously
adjusting 4 weights based on the residual of the previous prediction.
The next sample is predicted as the sum of (weight[i] * history[i]).
The adjustment of the weights is done with a "Sign-Sign-LMS" that adds or
subtracts the residual to each weight, based on the corresponding sample from
the history. This, surprisingly, is sufficient to get worthwhile predictions.
This is all done with fixed point integers. Hence the right-shifts when updating
the weights and calculating the prediction. */
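For anyone who finds the comment easier to follow as running code, here is a rough Python sketch of the Sign-Sign LMS idea it describes. The names and the unit step size are my own; the reference implementation works in fixed-point integers with right-shifts, as the comment notes, and derives the step from the residual:

```python
def lms_predict(weights, history):
    # Prediction is the dot product of the 4 weights with the
    # 4 previous reconstructed samples.
    return sum(w * h for w, h in zip(weights, history))

def lms_update(weights, history, residual, step=1):
    # Sign-Sign LMS: each weight moves by sign(residual) * sign(history[i]) * step,
    # so the whole adaptation needs only additions and subtractions.
    delta = step if residual >= 0 else -step
    return [w + (delta if h >= 0 else -delta) for w, h in zip(weights, history)]

# Toy usage: track a slowly rising signal.
weights = [0, 0, 0, 0]
history = [0, 0, 0, 0]
for sample in [1, 2, 3, 4, 5, 6]:
    prediction = lms_predict(weights, history)
    residual = sample - prediction
    weights = lms_update(weights, history, residual)
    history = history[1:] + [sample]  # slide the 4-sample history window
```

The point of the update rule is that the weights drift toward values that shrink future residuals, which is what makes coarsely quantized residuals workable.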
Re: Quite OK Audio (QOA)... anyone ?
Reply #8
QOA specification is still not frozen last time I checked.
Re: Quite OK Audio (QOA)... anyone ?
Reply #9
Out of curiosity, I compared this against a 32kHz-downsampled, FLAC compliant "FSLAC -2" encoding (using this preliminary 1.3.4 binary), which results in a similar bitrate. The reason for the comparison against FLAC is that, as ktf demonstrated in his lossless codec analysis, the FLAC reference software is extremely fast at low and medium presets as well.
Due to an apparent lack of psychoacoustic noise shaping in QOA (the quantization noise is spectrally almost white) and high efficiency (due to the extremely simple compression algorithm), FSLAC sounds quite a bit better to my ears, and so does LossyWAV+FLAC, I would assume. Especially on samples such as "Triangle", see the FSLAC thread here.
Is there any other feature in QOA that F(S)LAC doesn't offer?
Chris
If I don't reply to your reply, it means I agree with you.
Re: Quite OK Audio (QOA)... anyone ?
Reply #10
Other than up to 255 channels and guarantees about footprint and consistency, no. Flac is very fast but qoa is so simple that it should be an order of magnitude faster when optimised, if IO allows. The reference qoa decoder processes one sample at a time which can probably be improved without using SIMD and there may also be other speedups from where it stands.
Re: Quite OK Audio (QOA)... anyone ?
Reply #11
Flac is very fast but qoa is so simple that it should be an order of magnitude faster when optimised, if IO allows.
I fail to see how this algorithm is much simpler than FLAC's. I haven't looked at this in detail, but having weights being updated each sample is usually something detrimental to SIMD optimizations.
Music: sounds arranged such that they construct feelings.
Re: Quite OK Audio (QOA)... anyone ?
Reply #12
I'm noodling trying to do multiple sample decodes at once (not full SIMD but packing into uint64_t), the residual is easy to unpack 4 at a time like that but haven't figured out the predictor yet. You may be right that the predictor cannot really be SIMD per channel, it definitely could be by decoding a single sample from every channel at once ("subchannels" are interleaved) but that involves more spread out memory access which may need twiddling and defeat the purpose, and limited benefit as most input is likely 2 channel. There's no stereo decorrelation which helps. FWIW the weights for a channel fit in a uint64_t, so does the history (which the ref updates separately to the output, but it looks like the output could be used directly which may or may not be a benefit).
What is a lot simpler are the memory accesses, they're fixed and so is the structure of the data. If we're really lucky a few common channel counts could be auto-vectorised but I don't have much faith in that. Order of magnitude may be pushing it, currently the ref takes half the user time to decode as flac -8 no md5 which admittedly may not be a fair fight.
Re: Quite OK Audio (QOA)... anyone ?
Reply #13
I haven't looked at this in detail, but having weights being updated each sample is usually something detrimental to SIMD optimizations.
Updating weights each frame, not each sample, sounds promising.
Re: Quite OK Audio (QOA)... anyone ?
Reply #14
Possible improvements include:
• Adding noise shaping
• Use vector quantization, like E8 lattice, or PVQ to minimize the average root-mean-square error.
• Use QMF (quadrature mirror filters) to split the input into subbands, and encode each subband separately, like aptX does.
Re: Quite OK Audio (QOA)... anyone ?
Reply #15
Author of QOA here. Cool to see that it transpired to this forum - and that it's not met with immediate disgust!
To address some questions/remarks:
SIMD: yes, the algorithm doesn't seem to be very friendly to SIMD optimizations. I tried writing some intrinsics and only made it slower than my compiler's -O3. The problem is mainly that the prediction needs a horizontal accumulation of all 4 weights * history and these are bog slow on x86. Updating the LMS state every sample in itself isn't too bad. On my aging i7-6700k decoding sits at around 3500x realtime (one thread). I'm still looking for ways to make it faster.
Noise shaping: there's an experimental branch where I implemented some very simplistic noise shaping. code here. I made a comparison page with all test samples with and without this noise shaping: https://phoboslab.org/files/qoa-samples/noiseshaping.html – The difference in 32_triangles-triangle_roll_stereo is night and day. Though I have a hunch that this noise shaping hurts some other samples. E.g. the voice in julien_baker_sprained_ankle. But I'm not sure if I'm not making this up. My ears (and/or my equipment) are not the best. Feedback welcome!
QMF: I actually tried that and it didn't make a difference, but made the code much more complex. So, not terribly excited about that.
E8 lattice, or PVQ: I guess I have some reading to do...
Re: Quite OK Audio (QOA)... anyone ?
Reply #16
Very interesting improvements. I will follow this thread for sure!
Re: Quite OK Audio (QOA)... anyone ?
Reply #17
I have been interested in time-domain lossy audio compression for a long time, and this is a very cool idea and implementation! As mentioned in the blog there are two obvious competitors. On the simpler side there is 4.0 bps ADPCM and on the more complex side there is 2.5 bps WavPack lossy. I have done experimentation in the past and found that those two are roughly equivalent quality-wise, and I have successfully used both of them in embedded canned audio applications. I suspected from the blog description that this codec, at 3.2 bps, would fit right in between.
I ran some quick tests using a 44.1 kHz stereo music sample and a 16 kHz mono voice sample. For ADPCM I used my ADPCM-XQ encoder at the highest quality setting my patience would allow and for WavPack lossy I used the default mode with -x6. Both my ADPCM encoder and WavPack use dynamic noise shaping, so to make the comparison valid I turned that off (and verified that all three codecs generated flat noise).
In short, the results were exactly as I expected with all three codecs generating similar noise levels, within a dB or so. Of course, the encoding speed of ADPCM and WavPack were much slower than QOA, but that's irrelevant for canned audio. The decoding speed was too fast to measure with these samples and setup, but I imagine that they would line up according to complexity on embedded systems. Not sure where FLAC would fit in, but probably close to QOA.
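As a rough sanity check on those bits-per-sample figures, the corresponding raw stereo bitrates at 44.1 kHz are easy to work out. This is illustrative arithmetic only; it ignores frame headers and per-frame LMS state:

```python
def stereo_kbps(bits_per_sample, sample_rate=44_100, channels=2):
    # Raw residual payload bitrate in kbit/s, with no container overhead.
    return bits_per_sample * sample_rate * channels / 1000

for name, bps in [("ADPCM", 4.0), ("QOA", 3.2), ("WavPack lossy", 2.5)]:
    print(f"{name:>13}: {stereo_kbps(bps):6.1f} kbit/s")
```

That puts QOA at roughly 282 kbit/s of payload, in the same ballpark as the ~278 kbit/s figure quoted in the opening post (the exact number depends on frame overhead and how the "k" is counted).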
In addition to noise shaping, which has been discussed, there is one other low-hanging quality improvement to consider: mid-side encoding (sometimes called joint stereo). The advantage that this can bring to this kind of lossy encoding is not obvious nor easy to measure, but in cases where there is a significant amount of centered audio (e.g., a lead vocal) then by using mid-side encoding most of the quantization noise will also be centered spatially, which makes it more easily masked by the source. Of course in cases where the left and right channels are completely different, then left-right encoding is better, so there has to be some sort of heuristic to choose. This is obviously impossible with ADPCM without breaking existing decoders, but WavPack lossy uses this by default, and in all the -x modes switches it in and out dynamically.
Re: Quite OK Audio (QOA)... anyone ?
Reply #18
Having a viable alternative to proprietary ADPCM codecs seems like a worthy goal. The initial QOA blog post mentions video game applications, but as far as I can tell, QOA is missing one key feature for this purpose -- looping. As an example, CRI's ADX has three playback modes:
• No loop (play the whole file once)
• Loop all (upon reaching the final sample, continue playback from the first sample)
• Loop specific (upon reaching end sample y, continue playback from first sample x)
Method 2 requires you to cut the audio's start and end points ahead of time to ensure a seamless loop. Method 3 is required if the track has an introduction that is not repeated within the loop. In order to implement this in QOA, you'd need to allocate some space in the header for a loop type flag (0-2), as well as to store the starting and ending sample/block values.
ADX generally ignores the end loop position and treats the end of the file as the loop end, and then only uses the loop start position for the beginning of the loop. Since ADX was designed for CD-based games, it also requires that the loop start position lie on a CD sector boundary, i.e. you can only loop back to a sample that lies at the beginning of a 2048-byte CD sector. You probably don't need to replicate such a restriction in QOA, since most games no longer use optical media, but it might be useful if people are using QOA for homebrew games on older platforms. Built-in looping support would be a big selling point for using QOA in games, though.
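The playback side of the loop modes described above is simple enough to sketch: a loop-aware player just remaps a monotonically increasing sample counter onto sample indices in the file. This is a hypothetical illustration, not part of ADX or any QOA draft:

```python
def loop_position(n, loop_start, loop_end):
    # Map a running sample counter n to a sample index in the file,
    # jumping back to loop_start each time loop_end is reached.
    if n < loop_end:
        return n  # still in the intro / first pass
    span = loop_end - loop_start
    return loop_start + (n - loop_end) % span
```

"Loop all" (method 2) is then just the special case loop_start = 0 and loop_end = total_samples, while method 3 lets the intro before loop_start play exactly once.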
Re: Quite OK Audio (QOA)... anyone ?
Reply #19
Thanks bryant! Your "this is a very cool idea" comment means a lot to me :]
Mid-side encoding: it somehow never occurred to me that this could improve quality. I always thought of it as a way to allow quantizing one channel even more (which, now that I write this, makes it obvious that the quality would increase if you don't quantize more). Would be cool to allow that on a per frame basis, but I'm not sure if the added complexity would be worth it for the intended use cases. I'm trying to keep it really simple.
Looping: maybe I'm not understanding the issue, but I fail to see why the file format needs to support this. Ultimately it's the application's choice how and where to loop a file; and this info should imho be provided out of band.
I also just finished a new draft of the file format specification:
https://qoaformat.org/qoa-specification-draft-02.pdf
Re: Quite OK Audio (QOA)... anyone ?
Reply #21
That plugin is not mine. I sadly had to stop using foobar when I switched to linux a few years ago.
The foobar plugin is being developed by pfusik, here: https://github.com/pfusik/qoa-ci - maybe open an issue there. Is there any documentation of how (if at all) the plugin API changed for v2? If there's only minor changes, it may not be all too difficult.
Re: Quite OK Audio (QOA)... anyone ?
Reply #22
Looping: maybe I'm not understanding the issue, but I fail to see why the file format needs to support this. Ultimately it's the application's choice how and where to loop a file; and this info should imho be provided out of band.
The problem is that the loop start position will be different for every track, so unless the file format has a place to store the start sample number as metadata, you'd have to hardcode a table containing the start sample number for each track in an external file. That makes less sense than storing the metadata as part of the file itself, so that changes to the music do not require updating a separate file.
If QOA supports tags like ID3 or Vorbis comments, then you could just as easily store the start sample in a tag field, instead of the file header. I didn't see any mention of which (if any) tag format QOA uses, though. ADX just uses the file header, since it doesn't officially support any kind of tagging.
Re: Quite OK Audio (QOA)... anyone ?
Reply #23
That plugin is not mine. I sadly had to stop using foobar when I switched to linux a few years ago.
The foobar plugin is being developed by pfusik, here: https://github.com/pfusik/qoa-ci - maybe open an issue there. Is there any documentation of how (if at all) the plugin API changed for v2? If there's only minor changes, it may not be a all too difficult.
Will make a request over there then. Thnx
Re: Quite OK Audio (QOA)... anyone ?
Reply #24
I also just finished a new draft of the file format specification:
https://qoaformat.org/qoa-specification-draft-02.pdf
I see that you follow FLAC's channel order and allocation, where 4 resp. 5 channels are 4.0 and 5.0, and not 3.1 resp. 4.1.
I don't know what is actually the best, but you should think twice. The WAVEFORMATEXTENSIBLE channel order has LFE as channel four. And BL as channel five, so deciding that "an N channel signal MUST BE the first N in WAVEFORMATEXTENSIBLE" is not appropriate, at least not unless you can code one as "not present" for each block.
At https://xiph.org/flac/format.html there has "since forever" been a loose mention of SMPTE/ITU-R recommendations that aren't referenced properly - and also, those are superseded over time. Revision 9 of ITU-R BS.2159 is here: https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BS.2159-9-2022-PDF-E.pdf . You see channel orders.
I have a hunch that no "standard" ever did prescribe FLAC's allocation - only the order among the channels that are actually included. At least it seems to be that way by now. Apparently, DVD-Audio can accommodate four channels as FL FR + any among the following four: (FC LFE), (LFE BC), (FC BC) or (BL BR).
So ... careful. Which means you might want to consider whether
uint8_t num_channels; // no. of channels
should be something else.
Learning Python: Week3 (Conditionals and For Loops) -Part 4
As discussed in the post ( https://crazyrouters.wordpress.com/2017/02/25/learning-python-kirk-byers-python-course/ ), I will be sharing my learning on a weekly basis as the course continues. This will not only motivate me but also help others who are in the phase of learning Python 3.
This post will focus on Week 3 (Conditionals and For Loops), specifically Exercise 3.
##################### EXERCISE ########################
IV. Create a script that checks the validity of an IP address. The IP address should be supplied on the command line.
A. Check that the IP address contains 4 octets.
B. The first octet must be between 1 – 223.
C. The first octet cannot be 127.
D. The IP address cannot be in the 169.254.X.X address space.
E. The last three octets must range between 0 – 255.
For output, print the IP and whether it is valid or not.
#############END ########
In this exercise, the user enters an IP address on the command line, and it is checked for validity against the conditions above.
So let's start with code to get the IP address from the user on the command line using sys.argv.
As discussed in the last exercise, if the argument count is not 2 (for example, more than one IP address is entered), it will print "Error Made".
import sys

if len(sys.argv) == 2:
    ip_addr = sys.argv.pop()
    print("The IP address is :", ip_addr)
else:
    print("Error Made")
If we run the above code, we get the output below:
C:\Users\609807949\Documents\Personal\Python\kirk\week 3>py test1.py 10.10.10.1
The IP address is : 10.10.10.1
If more than 2 arguments are entered by the user:
C:\Users\609807949\Documents\Personal\Python\kirk\week 3>py test1.py 10.10.10.1
20.20.20.1
Error Made
So we have the IP address from the user on the command line. Let's split it into octets using the split() method:
ip_addr_new = ip_addr.split('.')
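Note that split() leaves each octet as a string, which is why the range checks later wrap each element in int(). For example (illustrative):

```python
ip_addr = "10.10.10.1"
ip_addr_new = ip_addr.split('.')
print(ip_addr_new)  # → ['10', '10', '10', '1']
```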
We use nested if-else statements to check all the required conditions below:
A. Check that the IP address contains 4 octets.
B. The first octet must be between 1 – 223.
C. The first octet cannot be 127.
D. The IP address cannot be in the 169.254.X.X address space.
E. The last three octets must range between 0 – 255.
if (len(ip_addr_new)) == 4:
    if (int(ip_addr_new[0]) >= 1 and int(ip_addr_new[0]) <= 223 and int(ip_addr_new[0]) != 127):
        if not (int(ip_addr_new[0]) == 169 and int(ip_addr_new[1]) == 254):
            if (int(ip_addr_new[1]) >= 0 and int(ip_addr_new[1]) <= 255 and
                    int(ip_addr_new[2]) >= 0 and int(ip_addr_new[2]) <= 255 and
                    int(ip_addr_new[3]) >= 0 and int(ip_addr_new[3]) <= 255):
                print("Ip address is valid")
            else:
                print("Ip address is Invalid")
        else:
            print("Ip address is Invalid")
    else:
        print("Ip address is Invalid")
else:
    print("Ip address is Invalid")
The following code checks condition A: that the IP address contains 4 octets.
if (len(ip_addr_new)) == 4:
Further, the code below checks conditions B and C: the first octet must be between 1 – 223 and cannot be 127.
if (int(ip_addr_new[0]) >= 1 and int(ip_addr_new[0]) <= 223 and int(ip_addr_new[0]) != 127):
Code to check condition D, that the IP address cannot be in the 169.254.X.X address space (it is invalid only when the first octet is 169 and the second is 254):
if not (int(ip_addr_new[0]) == 169 and int(ip_addr_new[1]) == 254):
Now remains the last condition, E: the last three octets must range between 0 – 255.
if (int(ip_addr_new[1]) >= 0 and int(ip_addr_new[1]) <= 255 and int(ip_addr_new[2]) >= 0 and int(ip_addr_new[2]) <= 255 and int(ip_addr_new[3]) >= 0 and int(ip_addr_new[3]) <= 255):
So we are done with all the required conditions: if any of them fails, the output is "Ip address is Invalid"; otherwise it is "Ip address is valid".
Here is the Code from scratch for this exercise.
(Screenshot of the complete script: exercise3.PNG)
Let’s check for each condition by providing valid and invalid input.
(Screenshot of the output for valid and invalid inputs: exercise3_out.PNG)
Method 2
The above code is not concise; let's write better code for the same problem.
Let's start from scratch.
import sys
if len(sys.argv) != 2:
    sys.exit("Usage: ./scriptarg2.py <ip_address>")

ip_add = sys.argv.pop()
As discussed earlier, the script exits if the argument count is not equal to 2; otherwise the last argument is popped into ip_add.
Let's define valid_ip as True; we will set it to False whenever a check for a genuine IP address fails.

valid_ip = True
As the user input is in dotted-decimal format, we need to split it into octets:
octets = ip_add.split('.')
Now let's check condition A, i.e. the number of octets should be 4.
if (len(octets)) != 4:
    sys.exit("The number of octets is invalid")
We will use a for loop to go over the octets and convert each element to int in place, since the checks will be performed on integers.
for i, octet in enumerate(octets):
    try:
        octets[i] = int(octet)
    except ValueError:
        sys.exit("\n\nInvalid IP address: {} \n".format(ip_add))

first_octet, second_octet, third_octet, fourth_octet = octets
Now it remains to check all the required conditions for the input to be a valid IP address.
First, check the conditions on the first octet.
if first_octet < 1:
    valid_ip = False
elif first_octet > 223:
    valid_ip = False
elif first_octet == 127:
    valid_ip = False
The code below checks the condition that the IP address cannot be in the 169.254.X.X address space.
if first_octet == 169 and second_octet == 254:
    valid_ip = False
Now remains the last condition: the last three octets must range between 0 – 255.
for octet in (second_octet, third_octet, fourth_octet):
    if (octet < 0) or (octet > 255):
        valid_ip = False
Let's print whether the provided IP address is valid or not:
if valid_ip:
    print("\n\nThe IP address is valid: {}".format(ip_add))
else:
    sys.exit("\n\nInvalid IP address: {}".format(ip_add))
Overall Code
import sys

if len(sys.argv) != 2:
    sys.exit("Usage: ./scriptarg2.py <ip_address>")

ip_add = sys.argv.pop()
valid_ip = True

octets = ip_add.split('.')
if (len(octets)) != 4:
    sys.exit("The number of octets is invalid")

for i, octet in enumerate(octets):
    try:
        octets[i] = int(octet)
    except ValueError:
        sys.exit("\n\nInvalid IP address: {} \n".format(ip_add))

first_octet, second_octet, third_octet, fourth_octet = octets

if first_octet < 1:
    valid_ip = False
elif first_octet > 223:
    valid_ip = False
elif first_octet == 127:
    valid_ip = False

if first_octet == 169 and second_octet == 254:
    valid_ip = False

for octet in (second_octet, third_octet, fourth_octet):
    if (octet < 0) or (octet > 255):
        valid_ip = False

if valid_ip:
    print("\n\nThe IP address is valid: {}".format(ip_add))
else:
    sys.exit("\n\nInvalid IP address: {}".format(ip_add))
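As an aside (not part of the course exercise), Python's standard-library ipaddress module can take over the basic well-formedness checks, leaving only the exercise-specific rules. A sketch of that approach, with a hypothetical is_valid helper:

```python
import ipaddress

def is_valid(ip_str):
    """Return True if ip_str satisfies the exercise's validity rules."""
    try:
        # Rejects wrong octet counts and out-of-range octets for us.
        ip = ipaddress.IPv4Address(ip_str)
    except ValueError:
        return False
    first, second = ip.packed[0], ip.packed[1]
    if first < 1 or first > 223 or first == 127:
        return False
    if first == 169 and second == 254:   # link-local 169.254.x.x
        return False
    return True

print(is_valid("10.10.10.1"))   # → True
print(is_valid("169.254.0.5"))  # → False
```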
So we are done with this exercise; I will be back with a new post.
smiles 🙂
SwiftyTesseract
Version Compatibility
| SwiftyTesseract Version | Platforms Supported | Swift Version |
| --- | --- | --- |
| 4.x.x | iOS, macOS, Linux | 5.3 |
| 3.x.x | iOS | 5.0 - 5.2 |
| 2.x.x | iOS | 4.2 |
| 1.x.x | iOS | 4.0 - 4.1 |
Develop
Develop should be considered unstable and API breaking changes could happen at any time. If you need to utilize some changes contained in develop, adding the specific commit is highly recommended:
.package(
url: "https://github.com/SwiftyTesseract/SwiftyTesseract.git",
// This is just an example of a commit hash, do not just copy and paste this into your Package.swift
.revision("0e0c6aca147add5d5750ecb7810837ef4fd10fc2")
)
SwiftyTesseract 3.x.x Support
4.0.0 contains a lot of major breaking changes and there have been issues when migrating from Xcode 11 to 12 with versions 3.x.x. The support/3.x.x branch has been created to be able to address any issues for those who are unable or unwilling to migrate to the latest version. This branch is only to support blocking issues and will not see any new features.
Support for Cocoapods and Carthage Dropped
As the Swift Package Manager improves year over year, I have decided to take advantage of the binary Swift packages announced during WWDC 2020, eliminating the need to build the dependency files ad hoc and serve them out of the main source repo. This also has the benefit of supporting other platforms, like Linux, via Swift Package Manager, because the project itself is no longer dependent on Tesseract being vendored out of the source repository. While I understand this may cause some churn with existing projects that rely on SwiftyTesseract as a dependency, Apple platforms have their own first-party OCR support through the Vision APIs.
SwiftyTesseract class renamed to Tesseract
The SwiftyTesseract class name felt a bit verbose and is more descriptive of the project than the class itself. To disambiguate between Google's Tesseract project and SwiftyTesseract's Tesseract class, all mentions of the class will be displayed as a code snippet: Tesseract.
Using SwiftyTesseract in Your Project
Import the module
import SwiftyTesseract
There are two ways to quickly instantiate SwiftyTesseract without altering the default values. With one language:
let tesseract = Tesseract(language: .english)
Or with multiple languages:
let tesseract = Tesseract(languages: [.english, .french, .italian])
Performing OCR
Platform Agnostic
Pass an instance of Data derived from an image to performOCR(on:)
let imageData = try Data(contentsOf: urlOfYourImage)
let result: Result<String, Tesseract.Error> = tesseract.performOCR(on: imageData)
Combine
Pass an instance of Data derived from an image to performOCRPublisher(on:)
let imageData = try Data(contentsOf: urlOfYourImage)
let result: AnyPublisher<String, Tesseract.Error> = tesseract.performOCRPublisher(on: imageData)
UIKit
Pass a UIImage to the performOCR(on:) or performOCRPublisher(on:) methods:
let image = UIImage(named: "someImageWithText.jpg")!
let result: Result<String, Error> = tesseract.performOCR(on: image)
let publisher: AnyPublisher<String, Error> = tesseract.performOCRPublisher(on: image)
AppKit
Pass an NSImage to the performOCR(on:) or performOCRPublisher(on:) methods:
let image = NSImage(named: "someImageWithText.jpg")!
let result: Result<String, Error> = tesseract.performOCR(on: image)
let publisher: AnyPublisher<String, Error> = tesseract.performOCRPublisher(on: image)
Conclusion
For people who want a synchronous call, the performOCR(on:) method provides a Result<String, Error> return value and blocks on the thread it is called on.
The performOCRPublisher(on:) publisher is available for ease of performing OCR in a background thread and receiving results on the main thread (only available on iOS 13.0+ and macOS 10.15+):
let cancellable = tesseract.performOCRPublisher(on: image)
.subscribe(on: backgroundQueue)
.receive(on: DispatchQueue.main)
.sink(
receiveCompletion: { completion in
// do something with completion
},
receiveValue: { string in
// do something with string
}
)
The publisher provided by performOCRPublisher(on:) is a cold publisher, meaning it does not perform any work until it is subscribed to.
Extensibility
The major downside to the pre-4.0.0 API was its lack of extensibility. If a user needed to set a variable or perform an operation that existed in the Google Tesseract API but didn't exist on the SwiftyTesseract API, the only options were to fork the project or create a PR. This has been remedied by creating an extensible API for Tesseract variables and Tesseract functions.
Tesseract Variable Configuration
Starting in 4.0.0, all public instance variables of Tesseract have been removed in favor of a more extensible and declarative API:
let tesseract = Tesseract(language: .english) {
set(.disallowlist, "@#$%^&*")
set(.minimumCharacterHeight, .integer(35))
set(.preserveInterwordSpaces, .true)
}
// or
let tesseract = Tesseract(language: .english)
tesseract.configure {
set(.disallowlist, "@#$%^&*")
set(.minimumCharacterHeight, .integer(35))
set(.preserveInterwordSpaces, .true)
}
The pre-4.0.0 API looks like this:
let swiftyTesseract = SwiftyTesseract(language: .english)
swiftyTesseract.blackList = "@#$%^&*"
swiftyTesseract.minimumCharacterHeight = 35
swiftyTesseract.preserveInterwordSpaces = true
Tesseract.Variable
Tesseract.Variable is a new struct introduced in 4.0.0. Its definition is quite simple:
extension Tesseract {
public struct Variable: RawRepresentable {
public init(rawValue: String) {
self.init(rawValue)
}
public init(_ rawValue: String) {
self.rawValue = rawValue
}
public let rawValue: String
}
}
extension Tesseract.Variable: ExpressibleByStringLiteral {
public typealias StringLiteralType = String
public init(stringLiteral value: String) {
self.init(value)
}
}
// Extensions containing the previous API variables available as members of SwiftyTesseract
public extension Tesseract.Variable {
static let allowlist: Tesseract.Variable = "tessedit_char_whitelist"
static let disallowlist: Tesseract.Variable = "tessedit_char_blacklist"
static let preserveInterwordSpaces: Tesseract.Variable = "preserve_interword_spaces"
static let minimumCharacterHeight: Tesseract.Variable = "textord_min_xheight"
static let oldCharacterHeight: Tesseract.Variable = "textord_old_xheight"
}
The problem here is that the library doesn't cover all the cases. What if you wanted to set Tesseract to only recognize numbers? You may be able to set the allowlist to only recognize numerals, but the Google Tesseract API already has a variable that does that: "classify_bln_numeric_mode".
Extending the library to make use of that variable could look something like this:
tesseract.configure {
set("classify_bln_numeric_mode", .true)
}
// Or extend Tesseract.Variable to get a clean trailing dot syntax:
// Using ExpressibleByStringLiteral conformance
extension Tesseract.Variable {
static let numericMode: Tesseract.Variable = "classify_bln_numeric_mode"
}
// Using initializer
extension Tesseract.Variable {
static let numericMode = Tesseract.Variable("classify_bln_numeric_mode")
}
tesseract.configure {
set(.numericMode, .true)
}
perform(action:)
Another issue that I've seen come up several times is "Can you implement X Tesseract feature?" as a feature request. This has the same implications as the old property-based accessors for setting Tesseract variables. The perform(action:) method allows users full access to the Tesseract API in a thread-safe manner.
This comes with one major caveat: you will be completely responsible for managing memory when dealing with the Tesseract API directly. Using the Tesseract C API means that ARC will not help you. If you use this API directly, make sure you instrument your code and check for leaks. Swift's defer functionality pairs really well with managing memory when dealing directly with C APIs; check out Sources/SwiftyTesseract/Tesseract+OCR.swift for examples of using defer to release memory.
All of the library methods provided on Tesseract other than Tesseract.perform(action:) and Tesseract.configure(_:) are implemented as extensions using only Tesseract.perform(action:) to access the pointer created during initialization. To see this in action see the implementation of performOCR(on:) in Sources/SwiftyTesseract/Tesseract+OCR.swift
As an example, let's implement issue #66 using perform(action:):
import SwiftyTesseract
import libtesseract
public typealias PageSegmentationMode = TessPageSegMode
public extension PageSegmentationMode {
static let osdOnly = PSM_OSD_ONLY
static let autoOsd = PSM_AUTO_OSD
static let autoOnly = PSM_AUTO_ONLY
static let auto = PSM_AUTO
static let singleColumn = PSM_SINGLE_COLUMN
static let singleBlockVerticalText = PSM_SINGLE_BLOCK_VERT_TEXT
static let singleBlock = PSM_SINGLE_BLOCK
static let singleLine = PSM_SINGLE_LINE
static let singleWord = PSM_SINGLE_WORD
static let circleWord = PSM_CIRCLE_WORD
static let singleCharacter = PSM_SINGLE_CHAR
static let sparseText = PSM_SPARSE_TEXT
static let sparseTextOsd = PSM_SPARSE_TEXT_OSD
static let count = PSM_COUNT
}
public extension Tesseract {
var pageSegmentationMode: PageSegmentationMode {
get {
perform { tessPointer in
TessBaseAPIGetPageSegMode(tessPointer)
}
}
set {
perform { tessPointer in
TessBaseAPISetPageSegMode(tessPointer, newValue)
}
}
}
}
// usage
tesseract.pageSegmentationMode = .singleColumn
If you don't care about all of the boilerplate needed to make your call site feel "Swifty", you could implement it simply like this:
import SwiftyTesseract
import libtesseract
extension Tesseract {
var pageSegMode: TessPageSegMode {
get {
perform { tessPointer in
TessBaseAPIGetPageSegMode(tessPointer)
}
}
set {
perform { tessPointer in
TessBaseAPISetPageSegMode(tessPointer, newValue)
}
}
}
}
// usage
tesseract.pageSegMode = PSM_SINGLE_COLUMN
ConfigurationBuilder
The declarative configuration syntax is achieved by accepting a function builder whose component functions each return a value of type (TessBaseAPI) -> Void. Using the previous example of extending the library to set the page segmentation mode of Tesseract, you could also create a function with a return signature of (TessBaseAPI) -> Void to utilize the declarative configuration block, either during initialization or through Tesseract.configure(_:):
import SwiftyTesseract
import libtesseract
func setPageSegMode(_ pageSegMode: TessPageSegMode) -> (TessBaseAPI) -> Void {
  return { tessPointer in
    TessBaseAPISetPageSegMode(tessPointer, pageSegMode)
  }
}
let tesseract = Tesseract(language: .english) {
setPageSegMode(PSM_SINGLE_COLUMN)
}
// or post initialization
tesseract.configure {
setPageSegMode(PSM_SINGLE_COLUMN)
}
(The information for what to implement for this example was found in the Tesseract documentation)
Conclusion
The major feature of 4.0.0 is its lack of features. The core of Tesseract is less than 130 lines of code, with the remainder of the code base implemented as extensions. I have attempted to be as unopinionated as possible while providing an API that feels right at home in Swift. Users of the library are not limited to what I have time for or what other contributors to the project are able to contribute. Now that this API is available, additions to the API surface of the library will be very selective. Given the extensibility, there should no longer be any restrictions on users of the library.
A Note on Initializer Defaults
The full signature of the primary Tesseract initializer is
public init(
languages: [RecognitionLanguage],
dataSource: LanguageModelDataSource = Bundle.main,
engineMode: EngineMode = .lstmOnly,
@ConfigurationBuilder configure: () -> (TessBaseAPI) -> Void = { { _ in } }
)
The dataSource parameter (which defaults to Bundle.main) is required to locate the tessdata folder. This will need to be changed if Tesseract is not being implemented in your application bundle or if you are developing a Swift Package project (in that case you would need to specify Bundle.module; see Tests/SwiftyTesseractTests/SwiftyTesseractTests.swift for an example). The engine mode dictates the type of .traineddata files to put into your tessdata folder. .lstmOnly was chosen as the default due to the higher speed and reliability found during testing, but this could vary depending on the language being recognized as well as the image itself. See Which Language Training Data Should You Use? for more information on the different types of .traineddata files that can be used with SwiftyTesseract.
libtesseract
Tesseract and its dependencies are now built and distributed as an xcframework under the SwiftyTesseract/libtesseract repository for Apple platforms. Any issues regarding the build configurations for those should be raised under that repository.
Installation
Swift Package Manager is now the only supported dependency manager for bringing SwiftyTesseract into your project.
Apple Platforms
// Package.swift
// swift-tools-version:5.3
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "AwesomePackage",
platforms: [
// These are the minimum versions libtesseract supports
.macOS(.v10_13),
.iOS(.v11),
],
products: [
.library(
name: "AwesomePackage",
targets: ["AwesomePackage"]
),
],
dependencies: [
.package(url: "https://github.com/SwiftyTesseract/SwiftyTesseract.git", .upToNextMajor(from: "4.0.0"))
],
targets: [
.target(
name: "AwesomePackage",
dependencies: ["SwiftyTesseract"]
),
]
)
Linux
// Package.swift
// swift-tools-version:5.3
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "AwesomePackage",
products: [
.library(
name: "AwesomePackage",
targets: ["AwesomePackage"]
),
],
dependencies: [
.package(url: "https://github.com/SwiftyTesseract/SwiftyTesseract.git", .upToNextMajor(from: "4.0.0"))
],
targets: [
.target(
name: "AwesomePackage",
dependencies: ["SwiftyTesseract"]
),
]
)
Linux Specific System Configuration
You will need to install libtesseract-dev (must be a >= 4.1.0 release) and libleptonica-dev on the host system before running any application that has a dependency on SwiftyTesseract. For Ubuntu (or Debian based distributions) that may look like this:
apt-get install -yq libtesseract-dev libleptonica-dev
The Dockerfiles in the docker directory and Examples/VaporExample provide an example. The Ubuntu 20.04 apt repository ships with compatible versions of libtesseract-dev and libleptonica-dev. If you are building against another distribution, then you will need to research what versions of the libraries are available or how to get appropriate versions installed into your image or system.
Additional configuration
Shipping language training files as part of an application bundle
1. Download the appropriate language training files from the tessdata, tessdata_best, or tessdata_fast repositories.
2. Place your language training files into a folder on your computer named tessdata
3. Drag the folder into your project. You must ensure that "Create folder references" is selected or Tesseract will not be successfully instantiated.
Shipping language training files as part of a Swift Package
If you choose to keep the language training data files under source control, you will want to copy your tessdata directory as a package resource:
let package = Package(
// Context omitted for brevity. The full Package.swift for this example
// can be found in Examples/VaporExample/Package.swift
targets: [
.target(
name: "App",
dependencies: [
.product(name: "Vapor", package: "vapor"),
"SwiftyTesseract"
],
// The path relative to your Target directory. In this example, the path
// relative to the source root would be Sources/App/tessdata
resources: [.copy("tessdata")],
)
]
)
If you prefer not to keep the language training data files under source control see the instructions for using a custom location below.
Custom Location
Thanks to Minitour, developers now have more flexibility in where and how the language training files are included for Tesseract to use. This may be beneficial if your application supports multiple languages but you do not want your application bundle (or git repo) to contain all the possible training files needed to perform OCR (each language training file can range from 1 MB to 15 MB). You will need to provide conformance to the following protocol:
public protocol LanguageModelDataSource {
var pathToTrainedData: String { get }
}
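The snippet that follows instantiates a CustomDataSource that is never defined in the post. As a purely illustrative sketch (the directory layout here is an assumption, not part of the library), a minimal conformance might look like this:

```swift
import Foundation
import SwiftyTesseract

// Illustrative only: serves .traineddata files from a "tessdata" folder
// inside the user's Documents directory.
struct CustomDataSource: LanguageModelDataSource {
  var pathToTrainedData: String {
    FileManager.default
      .urls(for: .documentDirectory, in: .userDomainMask)[0]
      .appendingPathComponent("tessdata")
      .path
  }
}
```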
Then pass it to the Tesseract initializer:
let customDataSource = CustomDataSource()
let tesseract = Tesseract(
language: .english,
dataSource: customDataSource,
engineMode: .lstmOnly
)
See the testDataSourceFromFiles() test in SwiftyTesseractTests.swift (located near the end of the file) for an example on how this can be done.
Language Training Data Considerations
There are three different types of .traineddata files that can be used in Tesseract: tessdata, tessdata_best, or tessdata_fast that correspond to Tesseract EngineModes .tesseractOnly, .lstmOnly, and .tesseractLstmCombined. .tesseractOnly uses the legacy Tesseract engine and can only use language training files from the tessdata repository. During testing of SwiftyTesseract, the .tesseractOnly engine mode was found to be the least reliable. .lstmOnly uses a long short-term memory recurrent neural network to perform OCR and can use language training files from either tessdata_best, tessdata_fast, or tessdata repositories. During testing, tessdata_best was found to provide the most reliable results at the cost of speed, while tessdata_fast provided results that were comparable to tessdata (when used with .lstmOnly) and faster than both tessdata and tessdata_best. .tesseractLstmCombined can only use language files from the tessdata repository, and the results and speed seemed to be on par with tessdata_best. For most cases, .lstmOnly along with the tessdata_fast language training files will likely be the best option, but this could vary depending on the language and application of SwiftyTesseract in your project.
Custom Trained Data
The steps required are the same as the instructions provided in additional configuration. To utilize custom .traineddata files, simply use the .custom(String) case of RecognitionLanguage:
let tesseract = Tesseract(language: .custom("custom-traineddata-file-prefix"))
For example, if you wanted to use the MRZ code optimized OCRB.traineddata file provided by Exteris/tesseract-mrz, the instance of Tesseract would be created like this:
let tesseract = Tesseract(language: .custom("OCRB"))
You may also include the first party Tesseract language training files with custom training files:
let tesseract = Tesseract(languages: [.custom("OCRB"), .english])
Recognition Results
When it comes to OCR, the adage "garbage in, garbage out" applies. SwiftyTesseract is no different. The underlying Tesseract engine will process the image and return anything that it believes is text. For example, giving SwiftyTesseract a raw, unprocessed image yields the following:
a lot of gibberish...
‘o 1 $ : M |
© 1 3 1; ie oI
LW 2 = o .C P It R <0f
O — £988 . 18 |
SALE + . < m m & f f |
7 Abt | | . 3 I] R I|
3 BE? | is —bB (|
* , § Be x I 3 |
...a lot more gibberish
You can see that it picked SALE out of the picture, but everything else surrounding it was still attempted to be read regardless of orientation. It is up to the individual developer to determine the appropriate way to edit and transform the image to allow SwiftyTesseract to render text in a way that yields predictable results. Originally, SwiftyTesseract was intended to be an out-of-the-box solution; however, the logic being added to the project made too many assumptions, and it did not seem right to force any particular implementation onto potential adopters. SwiftyTesseractRTE provides a ready-made solution that can be implemented in a project with a few lines of code that should suit most needs and is a better place to start if the goal for your project is to get OCR into an application with little effort.
Contributions Welcome
SwiftyTesseract does not implement the full Tesseract API. Given the extensible nature of the library, you should try to implement any additions yourself. If you think those additions would be useful to everyone else as well, please open a pull request! Please see Contributing to SwiftyTesseract for the full guidelines on creating issues and opening pull requests to the project.
Documentation
Official documentation for SwiftyTesseract can be found here
Attributions
SwiftyTesseract would not be possible without the work done by the Tesseract team.
See the Attributions section in the libtesseract repo for a full list of vendored dependencies and their licenses.
How to Limit a Mailbox Size by Group Policy
Even the largest hard drives become full eventually, so if your business uses Microsoft Exchange or Small Business Server to manage company email for employees, you may want to limit the size of personal mailboxes. While most hard drives can easily store millions of text emails, advertisers, suppliers and others often send emails with high-resolution images or attached files that can quickly consume storage space. Because limiting mailbox sizes for individual users in Exchange takes a considerable amount of time, use group or system policy features in Microsoft Exchange to enable mailbox size quotas for all users.
Microsoft Exchange 2007 or 2010
1. Use an administrator username and password to log on to the Windows Server computer with Exchange or Small Business Server installed.
2. Click “Start,” then “Server Management.” Click “Advanced Management” in the console tree located in the left pane of the Server Management window.
3. Double-click “Servers,” then double-click the computer name of the Windows Server. Alternatively, enter the computer name of the server in the “Server Name” field at the top of the Servers window and press “Enter.”
4. Double-click the “First Storage Group” link. In the First Storage Group Properties window, right-click “Mailbox Store” and select “Properties” from the context menu.
5. Click the “Limits” tab. Click to enable the “Prohibit send and receive at (KB)” option. Enter the maximum storage size for mailboxes in kilobytes. For example, to limit the size of email mailboxes for all users to 500MB, enter “500000” in the “Size” field. The default maximum size value for the “Prohibit send and receive at (KB)” is 200MB.
6. Click the “Issue warning at (KB)” option. Enter a number in kilobytes in the “Size” field that is lower than the number you entered in the “Prohibit send and receive at (KB)” field. When email files in a user’s mailbox reach the specified storage amount, Exchange sends a warning message to that user’s inbox, informing him that the mailbox is nearing its storage limit, and that he should delete unneeded messages to avoid email sending and receiving restrictions if the account reaches its storage quota.
7. Click the “OK” button to save the limits changes and close the Properties window. Close the First Storage Group window, then close the Server Management window.
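The kilobyte arithmetic above is easy to get wrong. As a rough sketch (the function names are mine, not part of Exchange), the decimal conversion the article uses and the warn-below-limit rule can be expressed as:

```python
def quota_kb(megabytes, decimal=True):
    """Convert a quota in MB to the KB value the "Limits" tab expects.

    The article uses decimal units (500 MB -> 500000 KB); pass
    decimal=False for binary units (500 MB -> 512000 KB).
    """
    return megabytes * (1000 if decimal else 1024)

def limits_valid(warning_kb, prohibit_kb):
    # Exchange expects the warning threshold strictly below the quota
    return 0 < warning_kb < prohibit_kb

print(quota_kb(500))                               # 500000
print(limits_valid(quota_kb(400), quota_kb(500)))  # True
```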
Exchange 2003
1. Log in to the Windows Server with an administrator username and password. Click “Start,” “All Programs,” then “Exchange System Manager.”
2. Click the “+” sign next to the “Administrative Groups” label in the Exchange System Manager window. In the expanded drop-down list, click “First Administrative Groups.”
3. Double-click “First Storage Group,” then right-click “Mailbox Store” after the new window opens, and select “Properties” from the context menu.
4. Click the “Limits” tab. Click the checkbox next to the “Prohibit send and receive at (KB)” option. Enter the storage size limit for mailboxes in kilobytes in the “Size” field. For example, to limit mailboxes to a maximum size of 250MB, enter “250000” in the “Size” field.
5. Enable the “Issue warning at (KB)” option and enter a value in kilobytes lower than the number you entered in the “Prohibit send and receive at (KB)” field.
6. Click “OK” to save the changes and close the Properties window. Close the First Storage Group and Exchange System Manager windows.
What are Case Classes in Scala
Reading Time: 2 minutes
Case Classes Image
Case Classes in Scala
Case classes in Scala are regular classes with some extra toppings. Let's discuss why they are in high demand.
A case class with no arguments is declared as a case object rather than a case class; a case object is serializable by default and carries more attributes than a regular object. Case classes can be decomposed using pattern matching, and their instances are compared structurally using the generated equals method.
Scala case classes are otherwise similar to regular classes, except that they are designed for modeling immutable data and for pattern matching.
By default, case class parameters are public and immutable, and the generated pattern-matching support makes it easier to write logical code.
Some of the characteristics of a Scala case class are as follows:
• case class parameters are fields.
• Scala produces methods like equals(), hashCode(), and toString() automatically as part of the case class.
• A case class has a copy method that copies an instance, optionally overriding selected arguments.
• An instance can be created without using the ‘new‘ keyword.
• A default toString method is generated, which is helpful for debugging.
-> With apply you don't need new, and constructor parameters are fields
case class CaseClassDemo(name: String) {
// Using constructor parameter as field
println(s"Hello, I am $name, How are you ?")
}
object CaseClassDemo extends App {
val Rohan = CaseClassDemo("Rohan")
}
Output : Hello, I am Rohan, How are you ?
-> copy method & equals method
object CaseClassDemo extends App {
case class Fruits(name: String)
val firstFruit = Fruits("Orange")
val secondFruit = firstFruit.copy()
println(secondFruit.name)
println(firstFruit == secondFruit)
}
Output : Orange
true
-> toString
object CaseClassDemo extends App {
case class Fruits(name: String, age: Int)
val fruit = Fruits("Orange", 15)
val ageOfFruitInString = fruit.age.toString
println("length of converted age is : " + ageOfFruitInString.length)
}
Output : length of converted age is : 2
Conclusion
Thank you for making it to the end of the blog. I hope you gained some knowledge of how to create a case class in Scala and learned about some of its important functions.
Reference
For more details please visit the official documentation by using the link: https://docs.scala-lang.org/
Written by
Pallav is working as a Software Consultant at Knoldus Inc. He is a technology enthusiast and a keen learner with more than 2 years of experience. He works as a Quality Catalyst in the organization and has a good understanding of programming languages like Java and Scala. He likes listening to songs and playing table tennis.
Monday, June 9, 2014
A2. Find the two-digit numbers
2. Find the two-digit numbers of the form ab, knowing that a⁴ + a² = 5 · b.
The problem asks us to find the numbers formed of two digits, a – the tens digit and b – the units digit. These digits can take values from 0 to 9. Since the numbers must have two digits, the tens digit cannot be 0. Note that 5 · b, on the right-hand side, is a multiple of 5.
The condition tells us that a⁴ + a² = 5 · b.
a⁴ + a² = a²(a² + 1), which is therefore a multiple of 5.
a² cannot be 5 (because a is a natural number), and a cannot itself be a multiple of 5: a = 5 would give a⁴ + a² = 650 = 5 · b, so b = 130, which is not a digit.
So a² + 1 = 5;
a² = 5 − 1 = 4;
a = 2
4 · 5 = 5 · b
b = 4
The other case, for the next multiple of 5:
a² + 1 = 10;
a² = 10 − 1 = 9;
a = 3
9 · 10 = 5 · b
b = 18, impossible because b is a single digit and can be at most 9.
The only number that solves the problem is 24.
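A short brute-force search confirms that 24 is the only solution:

```python
# Search all two-digit numbers "ab" (a = tens digit, b = units digit)
# for those satisfying a**4 + a**2 == 5*b.
solutions = [
    10 * a + b
    for a in range(1, 10)  # a >= 1, otherwise the number has only one digit
    for b in range(10)
    if a**4 + a**2 == 5 * b
]
print(solutions)  # [24]
```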
PhantomDiclonius · 1 year ago
Python Question
Getting error message when trying to break out of a while loop in Python
I'm trying to write code that includes the following:
1) Uses a conditional test in the while statement to stop the loop.
2) Uses an active variable to control how long the loop runs.
3) Use a break statement to exit the loop when the user enters a 'quit' value.
Here is my code:
prompt = "What is your age?"
prompt += "\nEnter 'quit' to exit: "
while True:
age = input(prompt)
age = int(age)
if age == 'quit':
break
elif age < 3:
print("Your ticket is free.")
elif 3 <= age <=12:
print("Your ticket is $10.")
elif 12 < age:
print("Your ticket is $15.")
else:
print("Please enter a valid age.")
I believe I've answered part 1 and 2 correctly but whenever I enter 'quit' or any other word to test for part 3, I get an error message that states: "ValueError: invalid literal for int() with base 10: 'quit'"
Does anyone have any suggestions of what I may be doing wrong in my code? Thank you for your time.
Answer
You are converting the user's input to a number before checking if that input is actually a number. Go from this:
age = input(prompt)
age = int(age)
if age == 'quit':
break
elif age < 3:
print("Your ticket is free.")
To this:
age = input(prompt)
if age == 'quit':
break
age = int(age)
if age < 3:
print("Your ticket is free.")
This will check for a request to exit before assuming that the user entered a number.
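As a further, optional refactor (the helper name here is mine, not from the original question), the pricing branches can be pulled into a function, which makes the branch order easy to test once the 'quit' check has happened:

```python
def ticket_price(age):
    """Pricing branches from the loop above, factored out for testing."""
    if age < 3:
        return "Your ticket is free."
    elif age <= 12:
        return "Your ticket is $10."
    else:
        return "Your ticket is $15."

print(ticket_price(2))   # Your ticket is free.
print(ticket_price(12))  # Your ticket is $10.
print(ticket_price(30))  # Your ticket is $15.
```

Inside the loop, after the quit check, the body then reduces to `print(ticket_price(int(age)))`. Note that for integer input the three branches are exhaustive, so the original else clause can only be reached if the input validation is extended.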
Intuitive Coins (Posted on 2005-03-29) Difficulty: 3 of 5
If you must pay an amount in coins, the "intuitive" algorithm is: pay as much as possible with the largest denomination coin, and then go on to pay the rest with the other coins. For example, if there are 25, 5 and 1 cent coins, to pay someone 32 cents, you'd first give him a 25 cents coin, then one 5 cent coin, and finally two 1 cent coins.)
However, this doesn't always end paying with as few coins as possible: if we had 25, 10 and 1 cent coins, paying 32 cents with the "intuitive" algorithm would use 8 coins, while three 10 cent coins and two 1 cent coins would be better.
We can call a set "intuitive", if the "intuitive algorithm" always pays out any amount with as few coins as possible.
The problem: give an algorithm that allows you to decide that {25,5,1} is an "intuitive" set, while {25,10,1} isn't.
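One way to decide this mechanically (a brute-force sketch, not the only possible algorithm) is to compare the greedy count against a dynamic-programming optimum for every amount up to a finite bound. I assume here, following the known Kozen–Zaks result for coin systems containing a 1-cent piece, that the smallest counterexample, if one exists, lies below the sum of the two largest denominations:

```python
def greedy_count(amount, coins):
    """Coins used by the "intuitive" largest-first algorithm."""
    used = 0
    for c in sorted(coins, reverse=True):
        used += amount // c
        amount %= c
    return used

def optimal_count(amount, coins):
    """Fewest coins for `amount`, by dynamic programming over amounts."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount]

def is_intuitive(coins):
    """True iff greedy matches the optimum for every amount up to the bound."""
    coins = sorted(coins)
    bound = coins[-1] + coins[-2]  # assumed sufficient; see lead-in
    return all(
        greedy_count(a, coins) == optimal_count(a, coins)
        for a in range(1, bound + 1)
    )

print(is_intuitive([25, 5, 1]))   # True
print(is_intuitive([25, 10, 1]))  # False
```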
Submitted by Federico Kereki
Rating: 3.8000 (5 votes)
Comments:
Solution: There's more to it.... (solution? spoiler?) | Comment 7 of 14
There's more to it than simple divisibility by the preceding value. As has already been pointed out, {1, 2, 3} is intuitive, and it appears as if {1, 2, 5} is also intuitive.
I suspect that the intuitiveness is related to divisibility by the preceding number, but you also have to consider the numbers preceding the preceding number as well.
Perhaps each number needs to be checked against the number preceding it and the number following it. For instance, with {1, 10, 25} you find the smallest number that is divisible by 10 which is larger than 25 and take the number of 10's it would take to get there (in this case 30 requires 3 dimes). Then find the number of 25's and 1's to get to the same value. If you can get to the number using fewer middle values then the set is NOT intuitive.
Using this criterion, {1, 5, 25} is intuitive, because it takes 6 5's to get to 30, but it also takes a total of 6 (5x1 + 1x25) to get to the same number.
The set {1, 7, 22} is NOT intuitive because it takes 4 7's to get to 28, but it requires 7 coins (1x22 + 6x1) to get to the same number.
The set {1, 2, 3} is intuitive because it takes 2 2's to get to 4, but it also takes 2 coins (1x3 + 1x1) to get to the same number.
The set {1, 2, 5} is intuitive because it takes 3 2's to get to 6, but it only takes 2 coins (1x5 + 1x1) to get to the same number.
The set {1, 5, 7} is NOT intuitive because it only takes 2 5's to get to 10, but it takes 4 coins (1x7 + 3x1) to get to the same number.
I'll leave it as an exercise to the reader to solve larger sets using the same criterion.
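The comment's single-amount probe can be transcribed directly. Whether one probe amount settles intuitiveness in general is the comment's conjecture (it resembles known one-point tests for three-coin systems), so the verdict rule below is illustrative rather than proven:

```python
def probe(c2, c3):
    """Coins needed to reach the probe amount two ways, for coins {1, c2, c3}.

    Returns (middles, big_plus_units) at m, the smallest multiple of c2
    strictly greater than c3, which is the amount the comment inspects.
    """
    m = c2 * (c3 // c2 + 1)
    middles = m // c2                  # pay m with c2-coins only
    big_plus_units = m // c3 + m % c3  # one-or-more c3-coins, rest in 1s
    return middles, big_plus_units

# Reproduce the comparisons made in the comment:
for c2, c3 in [(5, 25), (10, 25), (7, 22), (2, 3), (2, 5), (5, 7)]:
    middles, rest = probe(c2, c3)
    verdict = "intuitive" if middles >= rest else "NOT intuitive"
    print(f"{{1, {c2}, {c3}}}: {middles} vs {rest} -> {verdict}")
```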
Posted by Erik O. on 2005-03-30 15:47:52
Copyright © 2002 - 2018 by Animus Pactum Consulting. All rights reserved.
Learn Roslyn Now: Part 9 Control Flow Analysis
Control flow analysis is used to understand the various entry and exit points within a block of code and to answer questions about reachability. If we’re analyzing a method, we might be interested in all the points at which we can return out of the method. If we’re analyzing a for-loop, we might be interested in all the places we break or continue.
We trigger control flow analysis via an extension method on the SemanticModel. This returns an instance of ControlFlowAnalysis to us that exposes the following properties:
• EntryPoints – The set of statements inside the region that are the destination of branches outside the region.
• ExitPoints – The set of statements inside a region that jump to locations outside the region.
• EndPointIsReachable – Indicates whether a region completes normally. Returns true if and only if the end of the last statement is reachable or the entire region contains no statements.
• StartPointIsReachable – Indicates whether a region can begin normally.
• ReturnStatements – The set of returns statements within a region.
• Succeeded – Returns true if and only if analysis was successful. Analysis can fail if the region does not properly span a single expression, a single statement, or a contiguous series of statements within the enclosing block.
Basic usage of the API:
var tree = CSharpSyntaxTree.ParseText(@"
class C
{
void M()
{
for (int i = 0; i < 10; i++)
{
if (i == 3)
continue;
if (i == 8)
break;
}
}
}
");
var Mscorlib = PortableExecutableReference.CreateFromAssembly(typeof(object).Assembly);
var compilation = CSharpCompilation.Create("MyCompilation",
syntaxTrees: new[] { tree }, references: new[] { Mscorlib });
var model = compilation.GetSemanticModel(tree);
var firstFor = tree.GetRoot().DescendantNodes().OfType<ForStatementSyntax>().Single();
ControlFlowAnalysis result = model.AnalyzeControlFlow(firstFor.Statement);
Console.WriteLine(result.Succeeded); //True
Console.WriteLine(result.ExitPoints.Count()); //2 – continue, and break
Alternatively, we can specify two statements and analyze the statements between the two. The following example demonstrates this and the usage of EntryPoints:
var tree = CSharpSyntaxTree.ParseText(@"
class C
{
void M(int x)
{
L1: ; // 1
if (x == 0) goto L1; //firstIf
if (x == 1) goto L2;
if (x == 3) goto L3;
L3: ; //label3
L2: ; // 2
if(x == 4) goto L3;
}
}
");
var Mscorlib = PortableExecutableReference.CreateFromAssembly(typeof(object).Assembly);
var compilation = CSharpCompilation.Create("MyCompilation",
syntaxTrees: new[] { tree }, references: new[] { Mscorlib });
var model = compilation.GetSemanticModel(tree);
//Choose first and last statements
var firstIf = tree.GetRoot().DescendantNodes().OfType<IfStatementSyntax>().First();
var label3 = tree.GetRoot().DescendantNodes().OfType<LabeledStatementSyntax>().Skip(1).Take(1).Single();
ControlFlowAnalysis result = model.AnalyzeControlFlow(firstIf, label3);
Console.WriteLine(result.EntryPoints.Count()); //1 – label L3 is an entry point into these statements
Console.WriteLine(result.ExitPoints.Count()); //2 – goto L1 and goto L2 are candidate exit points
In the above example, label L3 is a possible entry point into the analyzed region. To the best of my knowledge, labels are the only possible entry points.
Finally, we’ll take a look at answering questions about reachability. In the following, neither the start point nor the end point is reachable:
var tree = CSharpSyntaxTree.ParseText(@"
class C
{
void M(int x)
{
return;
if(x == 0) //-+ Start is unreachable
System.Console.WriteLine(""Hello""); // |
L1: //-+ End is unreachable
}
}
");
var Mscorlib = PortableExecutableReference.CreateFromAssembly(typeof(object).Assembly);
var compilation = CSharpCompilation.Create("MyCompilation",
syntaxTrees: new[] { tree }, references: new[] { Mscorlib });
var model = compilation.GetSemanticModel(tree);
//Choose first and last statements
var firstIf = tree.GetRoot().DescendantNodes().OfType<IfStatementSyntax>().Single();
var label1 = tree.GetRoot().DescendantNodes().OfType<LabeledStatementSyntax>().Single();
ControlFlowAnalysis result = model.AnalyzeControlFlow(firstIf, label1);
Console.WriteLine(result.StartPointIsReachable); //False
Console.WriteLine(result.EndPointIsReachable); //False
Overall, the Control Flow API seems a lot more intuitive than the Data Flow Analysis API. It requires less knowledge of the C# specification and is straightforward to work with. At Code Connect, we’ve been using it when rewriting and logging methods. Although it looks like no one has experimented much with this API, I’m really interested to see what uses others will come up with.
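To give a feel for that rewriting/logging use case, here's a minimal sketch of my own (not the actual Code Connect implementation) that uses ReturnStatements to inject a logging call before every return in a method body. It assumes the same usings and Mscorlib reference as the examples above, and LogReturn() is a hypothetical logging method:

```csharp
// Sketch only: LogReturn() is a hypothetical helper, not a real API.
var tree = CSharpSyntaxTree.ParseText(@"
class C
{
    int M(int x)
    {
        if (x < 0)
        {
            return -1;
        }
        return x * 2;
    }
}");
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree }, references: new[] { Mscorlib });
var model = compilation.GetSemanticModel(tree);

var body = tree.GetRoot().DescendantNodes()
    .OfType<MethodDeclarationSyntax>().Single().Body;
ControlFlowAnalysis analysis = model.AnalyzeControlFlow(body);

// Insert a logging statement before every return the analysis found.
var logCall = SyntaxFactory.ParseStatement("LogReturn();");
var newBody = body.TrackNodes(analysis.ReturnStatements);
foreach (var ret in analysis.ReturnStatements)
{
    newBody = newBody.InsertNodesBefore(
        newBody.GetCurrentNode(ret), new[] { logCall });
}
Console.WriteLine(newBody.NormalizeWhitespace().ToFullString());
```

Because Roslyn syntax trees are immutable, each insertion produces a new tree; TrackNodes and GetCurrentNode are what keep the return-statement references valid across those rewrites.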
Use the HB to understand Google
Why is the HB #1?
Many have noted that pages from the HB are often among the top 5 listed in a Google query. It is not immediately evident to me why this should be so. This is more than a point of curiosity, since the ability to appear at the top of a Google search is very valuable for vendors and others who want traffic.
I propose that the HB could be used to understand the Google search engine. To start, one would set out variables which make an HB page attractive: examples might be white background, few pictures, uniform font. Over the following year these variables would systematically differ in a random manner for new postings. For example, I might post an idea and find that it has a crimson background. Another idea might turn up with multiple images of various things attached, and another might have varying font size. Others would have the standard HB layout.
After a year, one would see how the titles of ideas ranked in Google. One would expect that with enough numbers, title variability would cancel out as a variable and one could see how other aspects of the page layout affected rank.
A faster way to do this would be retrospectively, to see if altering the layout of a page affected its rank in a subsequent Google search. The algorithm could sweep through the entire HB every 3 months, with Google results determined immediately prior.
bungston, Jun 30 2006
For the image heavy pages, a random word from the text could be put into google image, and selection #1 used to decorate the page. I input "might" into google image and the first hit was very interesting.
bungston, Jun 30 2006
have you considered it may only be the things you search for that bring HB up? I keep coming up with pretty brunettes.
theircompetitor, Jun 30 2006
Lots of people claim to know how to game Google - and try to make a living off it. ("Search engine optimization experts".) I really appreciate not having to give a damn about that.
jutta, Jul 01 2006
[Bungston], you are bad.
zeno, Jul 01 2006
have you seen the poetry made up of the words that turn up most frequently in click-ads? it's pretty funny stuff.
tcarson, Jul 01 2006